Lawfare News

Similar Ethical Dilemmas for Autonomous Weapon Systems and Autonomous Self-Driving Cars

Kenneth Anderson, Matthew Waxman
Friday, November 6, 2015, 3:48 PM



In writing about autonomous weapon systems (AWS) and the law of armed conflict, we have several times observed the similarities between programming AWS and programming other kinds of autonomous technologies, as well as the similarities of ethical issues arising in each. Machine decision-making is gradually being deployed in emerging technologies as different as self-driving cars and highly automated aircraft, and many more will join them in such areas as elder-care machines and robotic surgery. These include decisions that involve potentially lethal consequences and decisions to engage in potentially lethal behaviors. As we put the point in a paper last year (co-authored with Daniel Reisner):

“Development of many of the enabling technologies of autonomous weapons systems—artificial intelligence and robotics, for example—are being driven by private industry for many commercial and societally-beneficial purposes (consider self-driving cars, surgical robots, and so on). They are developing and proliferating rapidly, independent of military demand and investment. Such civilian automated systems are already making daily decisions that have potential life and death consequences, such as aircraft landing systems. While most people are generally aware that these types of systems are highly automated (or even autonomous for some functions), and have become wholly comfortable with their use, relatively little public discourse has addressed the increasing decision-making role of autonomous systems in potentially life-threatening situations.”

A recent online article in MIT’s Technology Review raises the question of how self-driving cars used on roads in ordinary society ought to be programmed to deal with situations in which, for example, the choices for a self-driving car are to collide with a school bus (probably killing many children) or to run itself into a wall (probably killing its occupants). Such scenarios have been raised before - by Gary Marcus several years ago in the New Yorker, for example - but the issues are gaining greater practical traction as carmakers (Tesla, for example, and not just Google) gradually push their autopilot functions into new territory that begins to cross into genuinely “autonomous” driving.

The Technology Review article is titled, provocatively, “Why Self-Driving Cars Must Be Programmed to Kill.” It offers up the now-classic ethical dilemma - a sort of technologically ramped-up version of the famous “trolley car” hypotheticals in moral philosophy: how should the car “be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?”

The article goes on to observe that answers given to these ethical questions could have a “big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?” It goes on to describe recent studies in “experimental ethics” aimed at assessing the public’s moral intuitions about such scenarios as the pre-programmed sacrifice of the self-driving car and its occupants.

One striking feature of these ethical dilemmas is that they are new (at least in the driving context) because they offer the possibility that the machine could make a decision according to an ethical calculus that a human would be quite unlikely to be able to perform in the moment of an accident - or unwilling to perform. Tort law in the context of humans driving automobiles, for example, does not impose an affirmative duty of self-sacrifice on a human driver in order to save the children. One cannot be sued in tort for failing to drive one’s car into a wall, likely killing oneself, in order to avoid harming the children, however virtuous such an act might be. Programming built into a self-driving car’s computer, however, may allow for such life-and-death decisions to be made in advance.

Some ethical dilemmas that trouble many with respect to AWS are thus not necessarily unique to the weapons context. Other autonomous systems will have to address (even if only by ignoring the ethical issue) questions of if and when an autonomous system can be programmed to take actions likely to kill, including scenarios of killing the few in order to save the many. Our view continues to be that, to the extent such automation, autonomy, and robotics technologies come to be widely accepted as “more effective, safe and reliable than human judgment in many non-military realms, their use will almost certainly migrate into military ones. Indeed, future generations that perhaps come to routinely trust the computerized judgments of self-driving vehicles are likely to demand, as a moral matter, that such technologies be used to reduce the harms of war. It is largely a question of whether such systems work or not, and how well.”


Kenneth Anderson is a professor at Washington College of Law, American University; a visiting fellow of the Hoover Institution; and a non-resident senior fellow of the Brookings Institution. He writes on international law, the laws of war, weapons and technology, and national security; his most recent book, with Benjamin Wittes, is "Speaking the Law: The Obama Administration's Addresses on National Security Law."
Matthew Waxman is a law professor at Columbia Law School, where he chairs the National Security Law Program. He also previously co-chaired the Cybersecurity Center at Columbia University's Data Science Institute, and he is Adjunct Senior Fellow for Law and Foreign Policy at the Council on Foreign Relations. He previously served in senior policy positions at the State Department, Defense Department, and National Security Council. After graduating from Yale Law School, he clerked for Judge Joel M. Flaum of the U.S. Court of Appeals and Supreme Court Justice David H. Souter.
