Does Human Rights Watch Prefer Disproportionate and Indiscriminate Humans to Discriminating and Proportionate Robots?

Benjamin Wittes
Saturday, December 1, 2012, 10:19 AM

I have now read the Human Rights Watch report, "Losing Humanity: The Case Against Killer Robots"---which calls for a preemptive ban on "fully autonomous" weapons systems and which Matt and Ken critiqued here. I agree with their critique, but I find myself in rather more fervent opposition to the position Human Rights Watch has staked out here than Ken and Matt express in their very temperate post. I want, in this post, somewhat less temperately, to lay bare one of the potential consequences for the laws of war of taking seriously Human Rights Watch's call for a preemptive ban on fully autonomous weapons: an erosion of the principles of distinction and proportionality. For those who have read the report, this claim may seem a bit jarring, for it precisely inverts the report's argument---which is that fully autonomous robotic weapons systems would be unable to observe those very principles:
robots with complete autonomy would be incapable of meeting international humanitarian law standards. The rules of distinction, proportionality, and military necessity are especially important tools for protecting civilians from the effects of war, and fully autonomous weapons would not be able to abide by those rules.
But to reach this conclusion, the report's authors make a lot of assumptions about the technology that may well prove wrong. Most importantly, they reject without much examination the possibility that fully autonomous robots might, in some environments and for some situations, distinguish military targets far better and more accurately than humans can. To call for a per se ban on autonomous weapons is to insist as a matter of IHL on preserving a minimum level of human error in targeting. That is defensible only if one is certain that the baseline level of possible robotic error in civilian protection exceeds the baseline level of human error. I am not at all certain of that, in all imaginable sets of targeting situations, and Human Rights Watch shouldn't be either.

The Human Rights Watch report, published with the International Human Rights Clinic at Harvard Law School, suffers at the outset from a certain confusion about what "autonomy" really means. As Matt and Ken point out, autonomy isn't a binary thing. It's a spectrum. As MIT professor Missy Cummings explains in this podcast, there are many different levels at which a system can operate autonomously---a subject that William Marra and Sonia McNeil explore in this paper and which Ken and Matt take on in this recent article in Policy Review. What Human Rights Watch is really objecting to here is something less than full autonomy---which would involve the ability on the part of the robot to task itself on a mission. Rather, the objection is to a form of automation of a task: the task of deciding whether or not to use lethal force in a specific high-pressure situation.

To argue for a categorical preemptive ban on automating this function, you have to be convinced that human involvement is preferable to automation in all applications. For if even a narrow set of applications exists in which robots would protect civilians better than humans do, a per se ban would lock in a level of collateral damage that exceeds what is technologically necessary. In other words, if one believes that automation might in some instances erode civilian protection and in other instances enhance it, one should consider instead the development and deployment of automated technologies in those instances in which they would perform better than people and not in those instances in which they would make things worse. This is hardly a radical concept. Many weapons systems are lawful for some purposes and unlawful for others. But caution and care as to what situations might favor or disfavor autonomy are not what Human Rights Watch is urging here. Its position is that automated firing power should be banned in all situations.

So why exactly is Human Rights Watch certain that robotic technologies---still in their infancy---can never protect civilians better than humans in any circumstances of armed conflict? Here's the argument, using a few extended quotations from the report itself:
An initial evaluation of fully autonomous weapons shows that even with the proposed compliance mechanisms, such robots would appear to be incapable of abiding by the key principles of international humanitarian law. They would be unable to follow the rules of distinction, proportionality, and military necessity and might contravene the Martens Clause. Even strong proponents of fully autonomous weapons have acknowledged that finding ways to meet those rules of international humanitarian law are “outstanding issues” and that the challenge of distinguishing a soldier from a civilian is one of several “daunting problems.” Full autonomy would strip civilians of protections from the effects of war that are guaranteed under the law.
Robots with automated firing power would not be able to observe the principle of distinction, Human Rights Watch contends, because they
would not have the ability to sense or interpret the difference between soldiers and civilians, especially in contemporary combat environments. Changes in the character of armed conflict over the past several decades, from state-to-state warfare to asymmetric conflicts characterized by urban battles fought among civilian populations, have made distinguishing between legitimate targets and noncombatants increasingly difficult. States likely to field autonomous weapons first---the United States, Israel, and European countries---have been fighting predominately counterinsurgency and unconventional wars in recent years. In these conflicts, combatants often do not wear uniforms or insignia. Instead they seek to blend in with the civilian population and are frequently identified by their conduct, or their “direct participation in hostilities.”
Robots "might not have adequate sensors" to distinguish lawful from unlawful targets in such circumstances, the report notes. And even if they did, they "would not possess human qualities necessary to assess an individual’s intentions, an assessment that is key to distinguishing targets." These qualities, like empathy and judgment, Human Rights Watch argues, are key:
For example, a frightened mother may run after her two children and yell at them to stop playing with toy guns near a soldier. A human soldier could identify with the mother’s fear and the children’s game and thus recognize their intentions as harmless, while a fully autonomous weapon might see only a person running toward it and two armed individuals. The former would hold fire, and the latter might launch an attack. Technological fixes could not give fully autonomous weapons the ability to relate to and understand humans that is needed to pick up on such cues.
The group makes a similar argument about proportionality:
Determining the proportionality of a military operation depends heavily on context. The legally compliant response in one situation could change considerably by slightly altering the facts. According to the US Air Force, “[p]roportionality in attack is an inherently subjective determination that will be resolved on a case-by-case basis.” It is highly unlikely that a robot could be pre-programmed to handle the infinite number of scenarios it might face so it would have to interpret a situation in real time. . . . Those who interpret international humanitarian law in complicated and shifting scenarios consistently invoke human judgment, rather than the automatic decision making characteristic of a computer. The authoritative ICRC commentary states that the proportionality test is subjective, allows for a “fairly broad margin of judgment,” and “must above all be a question of common sense and good faith for military commanders.” International courts, armed forces, and others have adopted a “reasonable military commander” standard. The International Criminal Tribunal for the Former Yugoslavia, for example, wrote, “In determining whether an attack was proportionate it is necessary to examine whether a reasonably well-informed person in the circumstances of the actual perpetrator, making reasonable use of the information available to him or her, could have expected excessive civilian casualties to result from the attack.” The test requires more than a balancing of quantitative data, and a robot could not be programmed to duplicate the psychological processes in human judgment that are necessary to assess proportionality.
Underlying it all is a deep faith in human judgment, relative to the judgment of machines:
Proponents of fully autonomous weapons suggest that the absence of human emotions is a key advantage, yet they fail adequately to consider the downsides. Proponents emphasize, for example, that robots are immune from emotional factors, such as fear and rage, that can cloud judgment, distract humans from their military missions, or lead to attacks on civilians. They also note that robots can be programmed to act without concern for their own survival and thus can sacrifice themselves for a mission without reservations. Such observations have some merit, and these characteristics accrue to both a robot’s military utility and its humanitarian benefits. Human emotions, however, also provide one of the best safeguards against killing civilians, and a lack of emotion can make killing easier. . . . Whatever their military training, human soldiers retain the possibility of emotionally identifying with civilians, “an important part of the empathy that is central to compassion.” Robots cannot identify with humans, which means that they are unable to show compassion, a powerful check on the willingness to kill.
To put the matter simply, Human Rights Watch favors a per se preemptive ban on automated weapons because it can envision certain situations in which robotic technologies would distinguish civilians from combatants less well than people do and because robots might be less apt to inflect their responses with human emotions like compassion, empathy, and mercy. I agree with Ken and Matt's objection to this position: that Human Rights Watch is way too confident that it knows what will and will not be technologically possible some day. But there's another fundamental problem with the group's position: not all firing decisions take place in environments remotely as hazy as the sort of COIN situations Human Rights Watch uses in most of its examples.

Let's assume for purposes of argument that Human Rights Watch is entirely correct that a robot will never be able to surpass human decision-making in the situation in which a fearful mother is trying to protect her two children with toy guns near a soldier. That does not suggest to me that, say, an automated undersea weapons system that identifies, seeks out, and destroys enemy military submarines in wartime should be unthinkable. Rather, it's a plausible hypothesis that the universe of civilian submarines that look like military submarines is so small that the risks to civilians of such a system are near zero. Nor does it suggest to me that a fully automated missile defense system should be out of bounds. What if the human involvement in systems like Israel's Iron Dome merely encumbers speed, and what if the robots could protect civilians better if left unsupervised? Nor does it even suggest to me that a battlefield robot that---in a situation of state-to-state warfare in a hot combat zone---targets soldiers in enemy uniforms approaching forward U.S. positions should be prohibited.

Consider this last example for a moment. In this video, which I posted a few weeks ago, a robot sorts pills of different colors into bottles with an accuracy and speed no human could match:

This is off-the-shelf commercial technology, available today to manufacturers. It distinguishes red pills from white and yellow ones very quickly. It takes action based on that information---and it doesn't make mistakes. Is it really so unthinkable that, a few decades from now---or even sooner---robots might similarly distinguish uniformed enemies from civilians, or people carrying certain weapons from others, and might do so much faster, much more reliably, and from much greater distance than people can? Is it really so unthinkable that the robotic error rate in distinguishing combatants from civilians in some applications might be lower without human involvement? And if it's not unthinkable, then do we not risk undermining the principle of distinction by categorically barring the development of robots with automated firing power?

Don't get me wrong: I am not arguing for such robots. Like Ken and Matt, I am entirely agnostic about whether and when automated firing power will or will not protect civilians better than human judgment does. I'm merely arguing against per se opposition to such automation of the sort that would give rise to a preemptive international treaty. It seems to me, rather, that the principle of distinction requires a certain neutrality about the development of future technologies in the face of uncertainty. It may ultimately require that humans stay on the loop. But it may, in some instances, require the opposite. And I, for one, would not bet against the possibility that for some military applications, we will some day come to see mere human judgment as guaranteeing an unacceptable level of indiscriminate and disproportionate violence.
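For readers who want the arithmetic behind that wager spelled out, here is a minimal, purely illustrative sketch in Python. Every number in it is hypothetical; the engagement categories, error rates, and engagement counts are assumptions invented solely for illustration, not estimates of anything real. The point is structural: if automated targeting misidentifies protected persons less often than humans do in even one class of engagement, a categorical ban locks in the higher of the two error rates in that class, while a context-by-context rule does not.

```python
# A purely illustrative sketch; every number here is hypothetical and invented
# for this example. It encodes the structural claim in the post: if automation
# misidentifies protected persons less often than humans in even one class of
# engagement, a per se ban on automation locks in the higher error rate there.

# Hypothetical probability that a protected person or object is wrongly treated
# as a lawful target, by decision-maker and engagement type.
HUMAN_ERROR = {"urban_coin": 0.04, "undersea_warfare": 0.002, "missile_defense": 0.01}
ROBOT_ERROR = {"urban_coin": 0.09, "undersea_warfare": 0.0001, "missile_defense": 0.001}

# Hypothetical number of targeting decisions of each type.
ENGAGEMENTS = {"urban_coin": 1000, "undersea_warfare": 200, "missile_defense": 500}


def expected_misidentifications(error_rates):
    """Expected number of wrongly targeted protected persons across all engagements."""
    return sum(ENGAGEMENTS[kind] * error_rates[kind] for kind in ENGAGEMENTS)


# Per se ban: a human makes every firing decision, regardless of context.
ban_total = expected_misidentifications(HUMAN_ERROR)

# Context-by-context rule: use whichever decision-maker errs less in each setting.
selective = {kind: min(HUMAN_ERROR[kind], ROBOT_ERROR[kind]) for kind in ENGAGEMENTS}
selective_total = expected_misidentifications(selective)

print(f"per se ban on automation:  {ban_total:.2f} expected misidentifications")
print(f"context-by-context rule:   {selective_total:.2f} expected misidentifications")
```

With these made-up numbers, the robot is assumed to be worse than the human soldier in the urban counterinsurgency case, so the selective rule simply keeps humans in charge there; the entire gain comes from the contexts in which the machine is assumed to err less. Change the assumptions and the answer changes with them, which is exactly why a categorical, preemptive ban seems premature.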

Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
