
War Machines: Artificial Intelligence in Conflict

Stephanie Carvin
Thursday, April 26, 2018, 3:47 PM


A review of Paul Scharre’s “Army of None: Autonomous Weapons and the Future of War” (W.W. Norton, 2018).

***



Having invented the first machine gun, Richard John Gatling explained (or at least justified) his invention in a letter to a friend in 1877: With such a machine, it would be possible to replace 100 men with rifles on the battlefield, greatly reducing the number of men injured or killed. This sentiment, that soldiers might be replaced, or at least shielded from harm to the greatest extent possible through the inventions of science and technology, has been a thoroughly American ambition since the Civil War. And now, with developments in computing, artificial intelligence and robotics, it may soon be possible to replace soldiers entirely.

Only this time America is not alone and may not even be in the lead. Many countries, including Russia and China, are believed to be developing weapons able to operate autonomously: to discover a target, decide to engage, and then attack, all without human intervention. To anyone paying even cursory attention to this issue, the legal, ethical, and political challenges are profound. Wars that increasingly involve (or are even fought between) intelligent machines, swarms, cyber attacks, and robots will upend many of our current understandings of warfare, including what constitutes a weapon or a soldier, responsibility and accountability in warfare, and perhaps even what constitutes a conflict. We will need answers to old questions recast by new technologies.

Paul Scharre (a former Army Ranger, adviser to the Obama administration on artificial intelligence, and currently head of an emerging technologies program at the Center for a New American Security) has written an ambitious and fascinating book that will appeal to a wide audience and help readers begin navigating these challenges. Drawing upon the latest technological developments, just war theory, the laws of war, and his own impressive military and policy background, “Army of None: Autonomous Weapons and the Future of War” delivers what will likely be the most important general-audience book on this topic for at least the next decade.

A core strength of the book is its sheer scope. Scharre takes a very broad view of the national security challenges posed by AI and autonomous systems: not just “weapons” in the narrow legal sense, but the impact of these emerging technologies across military systems. “Army of None” is not limited to Terminator-style robots (although the eponymous film makes a cameo in many of the chapters), but rather emphasizes how entire systems are being revolutionized. In this sense, it is not just the battlefield that will be affected, but intelligence, cyber-warfare, and the speed at which military and all national security operations unfold.

The first few chapters cover the autonomous systems essentials that general readers need to understand: the sliding scale of automation and autonomy; systems with humans “in,” “on,” or “out” of the loop; and developments in artificial neural networks and machine decision-making. Scharre is careful to break down key concepts (including what a “weapon” is) so that those unaccustomed to military and technical parlance can keep up. While this may be somewhat basic for engineers or military personnel, it allows “Army of None” to lay the groundwork for its arguments about what exactly will (and will not) change about future weapons systems and the challenges they pose in law, ethics, strategy, and policy.

One of these challenges is a problem basic to engineering: when and how engineered systems fail, and in this case, when and how highly complex autonomous systems fail. The fact is that systems do fail, and whole sub-disciplines of engineering, such as reliability engineering, are devoted to the problem. How can we trust autonomous systems if we cannot predict exactly how they might behave? Software systems, whether standalone or part of a larger robotic machine, are notoriously opaque. Will these complex systems be able to tell us if something is going wrong, or what that something might be? Here, “Army of None” turns to the debate between “normal accident theory,” which holds that accidents are inevitable in fast-paced, highly interactive systems, with unpredictable and sometimes deadly results, on the one hand, and theories of “high-reliability organizations” able to manage risky systems through rigorous procedures and practices, on the other.

Normal accident and high-reliability theories have previously been applied to nuclear weapons, and because AI and autonomous weapons are very new, with much still unknown about how they will fail, they offer an appropriate framework for understanding the challenges. Scharre notes that the apparent solution to this dilemma, building more complex systems that can handle more rules, might make those systems even more difficult to understand and, ironically, to control. When combined with machines that learn lessons for themselves, the problem may grow worse still.

While Scharre devotes much of the book to explaining recent developments and technical challenges, he does not shy away from legal and ethical issues. He speaks with the lead activists of the “Campaign to Stop Killer Robots” about their concerns and provides a historical overview of attempts to ban weapons, dating back to the Middle Ages. Scharre has participated in these discussions at the United Nations Convention on Certain Conventional Weapons in Geneva and can speak firsthand to the challenges of crafting international law on these issues. If we cannot even define what “meaningful human control” is, how can we begin to discuss a ban? Yet Scharre notes that “no country has suggested that it would be acceptable for there to be no human involvement whatsoever in decisions about the use of lethal force.” This, at the very least, might be a place to start.

But what I appreciated most about the book is that Scharre is not a technological determinist. He leaves considerable room for humans to decide what kinds of tasks AI and autonomous systems will be given in military and national security matters. Indeed, he notes that within the U.S. military there is internal resistance to, and a level of discomfort with, fully autonomous weapons; it is not yet clear what tasks such weapons might be used for. Instead, one important factor driving states to develop these weapons seems to be the fear that other states are doing the same: a very human security dilemma in the age of autonomy.

If there are faults in the book, they are minor. One thing I did not appreciate (though I suspect I will be in the minority here) was the over-reliance on pop-culture references. Hardly a page goes by, or a case study is examined, without some film being discussed. While this will certainly help some readers grasp the ideas Scharre is conveying, it is not clear to me that pop culture is always the best way to understand these issues. Films can distort rather than illustrate. Sci-fi metaphors and analogies are powerful, but they invite powerfully wrong understandings of AI, robotics, and autonomous weapons as they exist today. Indeed, it is very unlikely, outside the imagination, that any of the actual present or future weapons “Army of None” discusses will resemble the “Terminator,” so why so many cameo appearances? Fortunately, Scharre balances out James Cameron with interviews with actual roboticists and policymakers. Still, a cautionary note on the limits of movies as a way of understanding these issues would have been appreciated.

Second, as with many writings on artificial intelligence in American national security and defense, “Army of None” does not really discuss the problems these new technologies will pose for alliances. While America can fight alone, it often prefers to fight alongside like-minded states. But what happens when many of America’s allies can no longer keep up with its autonomous arsenal? States are not developing these systems at an equal pace. Further, while Scharre discusses what might happen when adversaries possess military AI systems, how do we ensure that the AI systems allies use are interoperable among them, particularly when so much of the technology might be secret? I would have appreciated Scharre’s insights on these issues.

Nevertheless, “Army of None” is excellent and a must-read for anyone interested in, or working on, these issues. Indeed, that the book will appeal to so many is one of its core strengths. In this sense it is very much like Peter W. Singer’s successful “Wired for War,” and it could even be seen as a useful update to that book a decade later. Ideally, “Army of None” will start conversations among policy-makers, lawyers, activists, militaries and the engineers creating these machines, helping them grapple with the implications of increasing autonomy in warfare.


Stephanie Carvin is an associate professor with the Norman Paterson School of International Affairs at Carleton University and co-author of “Intelligence Analysis and Policy Making: The Canadian Experience,” recently published by Stanford University Press.
