
Tom Malinowski Responds on Lethal Autonomous Systems: Part II

Benjamin Wittes
Thursday, December 6, 2012, 7:56 AM

After responding yesterday to Matt and Ken on lethal autonomous robots, Human Rights Watch's Tom Malinowski now takes aim at my critique of his group's report on robots with autonomous lethal capability. I post his response along with some thoughts of my own below it:

Having responded to Matt and Ken’s critique of Human Rights Watch’s report on killer robots, let me address some of Ben’s additional comments.

Ben’s main argument seems to be that greater automation in military technology offers potential advantages as well as disadvantages from the standpoint of protecting civilians. It would make more sense, therefore, to wait and see how the technology develops, rather than preemptively prohibiting systems that could in the end perform better than people.

Ben is right that there are civilian protection benefits to some aspects of automation. He doesn’t actually describe what those benefits are, but let me state a few: Machine soldiers will never act out of anger or panic or fatigue, which are common causes of war crimes. Since they do not fear for their lives, they can more closely approach and better observe a target, to make sure that it is lawful and to determine if civilians may be harmed. Robots also would be able to process information from multiple sources much more quickly than human beings, which could be helpful in making correct targeting decisions in high pressure battlefield situations.

But what Ben fails to consider is that all of these benefits can be had by deploying remotely controlled robotic weapons, which would be automated in many or most respects, but would still keep a “man in the loop” for decisions about firing on human beings. Today’s Predator drones, when appropriately used, have all the advantages I describe above, but a human being must still decide when to pull the trigger. The same will be true of future generations of airborne and ground-based weapons operated remotely by human controllers. The unmanned submarines that Ben envisions would work just as well if, once they autonomously identified their target, a human controller had to approve the launch of their torpedo (a safeguard that I imagine the Pentagon would insist on in any foreseeable future, given the danger that malfunction or malware could result in the sinking of a civilian vessel).

Full automation, with no man in the loop, might have military advantages, in that machines can act more quickly if they don’t have to wait for a human being to make a decision. But it’s hard to imagine what additional benefits removing the human controller would give us from the standpoint of civilian protection. Once the robot has done what it does best -- identifying a target, calculating the likely effects of possible courses of action, and so on -- why would we ever not want the added check of a human being exercising judgment?

A couple of additional notes:

First, Ben suggests that HRW supports “categorically barring the development of robots with automated firing power,” a position that, he notes, might preclude automated missile defense systems like a possible future version of Israel’s Iron Dome. In fact, we have not argued for banning automated systems designed to fire on unmanned objects like incoming missiles. This debate is about full “lethal autonomy” – whether robots should be able to decide whether to launch attacks that kill people.

Second, Ben says that HRW makes “a lot of assumptions” about whether artificial intelligence will ever advance to the point where machines might be able to apply the laws of war, and that we “reject without much examination the possibility that fully autonomous robots” might sometimes do so better than human beings. I can assure Ben that HRW researchers are not allowed to make assumptions or to draw conclusions without examination. The authors of our report on lethal autonomy interviewed many of the leading experts in robotics and computer science and comprehensively examined the literature in the field. While the technical experts do not all hold the same view, most express great doubts that the technology will ever allow for subtle and adaptive reasoning in complex battlefield environments. Sorry, Ben, but there is an enormous gulf between robots that can distinguish between different colored pills, as cool as those are, and robots that can distinguish combatants from civilians well enough to be trusted to kill them.

And even if advances in artificial intelligence someday allow us to bridge that gulf, it’s also fairly clear that nations will be able to build battlefield-ready autonomous systems far sooner. The deployment of such systems by US competitors will put enormous pressure on the US to do the same, whether the technology has satisfied our ethical and legal concerns or not. We could, as Ben suggests, wait till then to decide whether to prohibit or regulate full lethal autonomy. But realistically, once these weapons enter into the arsenals and strategies of major military powers, and industries grow to produce them, it will be difficult, if not impossible, to turn back the clock. The choices we make today, therefore, are ones we had better be certain we can live with.

Let's start with the good news: Tom's piece somewhat narrows the gap between us by declaring that Human Rights Watch does not, in fact, call for the banning of fully automated defensive systems "designed to fire on unmanned objects like incoming missiles." This position is welcome, though it might surprise readers of the report, who will have read this recommendation:

Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.

States should preemptively ban fully autonomous weapons because of the threat these kinds of robots would pose to civilians during times of war. A prohibition would ensure that firing decisions are made by humans, who possess the ability to interpret targets’ actions more accurately, have better capacity for judging complex situations, and possess empathy that can lead to acts of mercy. Preserving human involvement in the decisionmaking loop would also make it easier to identify an individual to hold accountable for any unlawful acts that occur from the use of a robotic weapon, thus increasing deterrence and allowing for retribution.

This prohibition should apply to robotic weapons that can make the choice to use lethal force without human input or supervision. It should also apply to weapons with such limited human involvement in targeting decisions that humans are effectively out of the loop. For example, a human may not have enough time to override a computer’s decision to fire on a target, or a single human operator may not be able to maintain adequate oversight of a swarm of dozens of unmanned aircraft. Some on-the-loop weapons could prove as dangerous to civilians as out-of-the-loop ones. Further study will be required to determine where to draw the line between acceptable and unacceptable autonomy for weaponized robots. (pp. 46-47)

The report is, in fact, decidedly less careful about distinguishing between autonomous robots with human targets and those with non-human targets than Tom and I would both hope robots would be in distinguishing between civilians and combatants. At times, the authors distinguish. But at the key moments in the report, they sweep very broadly indeed. What's more, Tom's contention here that Human Rights Watch has no problem with missile defense systems like Israel's Iron Dome is a little hard to reconcile with a report that specifically describes Iron Dome as a kind of precursor to the very sort of autonomy we should fear. The discussion appears in a section entitled "Automatic Defense Systems," which describes these systems as "one step on the road to autonomy." The anxiety about this sort of defensive system is palpable:

As weapons that operate with limited intervention from humans, automatic weapons defense systems warrant further study. On the one hand, they seem to present less danger to civilians because they are stationary and defensive weapons that are designed to destroy munitions, not launch offensive attacks. On the other hand, commentators have questioned the effectiveness of the human supervision in the C-RAM and other automatic weapons defense systems. Writing about the C-RAM, Singer notes, “The human is certainly part of the decision making but mainly in the initial programming of the robot. During the actual operation of the machine, the operator really only exercises veto power, and a decision to override a robot’s decision must be made in only half a second, with few willing to challenge what they view as the better judgment of the machine.” When faced with such a situation, people often experience “automation bias,” which is “the tendency to trust an automated system, in spite of evidence that the system is unreliable, or wrong in a particular case.” In addition, automatic weapons defense systems have the potential to endanger civilians when used in populated areas. For example, even the successful destruction of an incoming threat can produce shrapnel that causes civilian casualties. Thus these systems raise concerns about the protection of civilians that full autonomy would only magnify. (pp. 12-13)

If Human Rights Watch really has no problem with purely defensive robotic autonomy, that considerably narrows the difference between us---but Tom should probably notify the authors of the group's report.

Even if Human Rights Watch climbs down on this point, this would still leave the problem of autonomous robots used against human targets---a matter on which Tom stands by the report's call for a categorical ban. And Tom here asks a fair question: While it's easy to see the potential military benefits of autonomous firing power, he writes, "it’s hard to imagine what additional benefits removing the human controller would give us from the standpoint of civilian protection."

Really? I'm not sure of that at all. Imagine a robot designed for targeted strikes against individual targets. The target, a senior terrorist figure, has already been selected, but he's living in a compound with large numbers of civilians---wives and kids, say---with whom he is interacting on an ongoing basis. Precisely to discourage drone strikes, he keeps himself close to these and other civilians. Now imagine that our new robot has a few advances---none of them unthinkable, in my view---over current technology. First, it is very small and quiet and thus hovers very low, allowing it to strike incredibly quickly. Second, it has very long loiter time, so it can watch the target for long periods. Third, its on-board weapon has an extremely small blast radius; maybe it crashes into the target's head and explodes, but it's specifically designed not to destroy the building (and the other people in the building) in doing so. And fourth---and critically---to make sure that the strike does not kill the civilians in the compound, the robot decides autonomously when to attack, exploiting split-second calculations about the target's distance from the civilians in relation to the weapon's blast radius. If you think this robot sounds fantastical, consider this one---which already exists.
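
To make the decision rule in that scenario concrete, here is a minimal sketch, in Python, of the fire-or-hold check such a robot would have to run many times a second. Everything in it (the class name, the tracking data, the safety margin) is an assumption of mine for purposes of illustration; it describes no real system.

```python
# Hypothetical illustration only: a per-tick fire-or-hold check for the
# hovering strike robot imagined above. All names and numbers are
# invented for this example; no real system is being described.

from dataclasses import dataclass
from math import dist


@dataclass
class Track:
    """Estimated position of one tracked person, in meters."""
    x: float
    y: float


def clear_to_fire(target: Track,
                  civilians: list[Track],
                  blast_radius_m: float,
                  safety_margin_m: float = 2.0) -> bool:
    """Fire only if every tracked civilian is outside the weapon's
    blast radius plus a safety margin at this instant."""
    keep_out = blast_radius_m + safety_margin_m
    return all(dist((target.x, target.y), (c.x, c.y)) > keep_out
               for c in civilians)


# One sensor tick: target at the doorway, two civilians across the courtyard.
target = Track(0.0, 0.0)
civilians = [Track(9.0, 1.0), Track(12.0, -3.0)]
print(clear_to_fire(target, civilians, blast_radius_m=3.0))  # prints True
```

The point is simply that a machine re-running a check like this every fraction of a second might time the strike more protectively than a remote human operator ever could.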

To use another example, what if we could program a Hellfire missile to abort its mission and slam harmlessly into the ground if, between launch and impact, it calculates a greater likelihood of civilian casualties than the humans who launched it deemed acceptable? Such a missile would be, in effect, making thousands of autonomous firing (or not firing---which is really the same thing) decisions during the course of its flight.
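
Again purely for illustration, and with the harm estimator and threshold invented for the example rather than drawn from any real weapon, the abort rule I have in mind amounts to something like this:

```python
# Hypothetical illustration only: the in-flight abort rule for the
# self-aborting missile imagined above. All values are invented; this
# is not a description of any real weapon or guidance software.

def should_abort(estimated_civilian_harm: float,
                 acceptable_at_launch: float) -> bool:
    """Abort if the in-flight estimate of likely civilian harm now
    exceeds the level the human launcher deemed acceptable."""
    return estimated_civilian_harm > acceptable_at_launch


# Simulated estimates, re-checked repeatedly between launch and impact
# (here, civilians are detected entering the target area mid-flight).
in_flight_estimates = [0.02, 0.03, 0.05, 0.20]
for estimate in in_flight_estimates:
    if should_abort(estimate, acceptable_at_launch=0.10):
        print("abort: diverting to a harmless impact point")
        break
```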

I am not sure why Human Rights Watch would want to prohibit the "development" of such systems as a matter of IHL. To put the matter simply, I find it highly plausible---though by no means certain---that robotic autonomy could yield civilian protection benefits in a number of areas, and I'm modest enough in my sense of my own ability to predict future technological developments that I have no illusions that I'm either exhausting the opportunities or necessarily identifying the right ones.

This is really the most fundamental disagreement I have with Human Rights Watch, which wants to ban development of technologies out of the conviction that they will likely violate existing IHL. This makes no sense to me, as the use of lethal autonomous robots that can't observe IHL principles is already banned---for the simple reason that states are always obliged to respect these principles. States either have a treaty obligation to conduct legal reviews of new weapons systems or, in the case of the United States, conduct such reviews without being a signatory to Protocol I, and they certainly have an obligation not to use weapons in a fashion that fails to observe the principles of distinction and proportionality. So while I am in full agreement that autonomous robots that can't pass such a legal review should not be deployed, I can't quite see why current law does not provide adequate assurances. Put another way, why should we create an international treaty banning technological development on the speculative hypothesis that a robot could never pass such a legal review? Unless, that is, the goal is to keep humans in the loop even if the robots can observe IHL principles better without us.


Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
