Published by The Lawfare Institute
in Cooperation With
Warfare is increasingly guided by software. Today, armed drones can be operated by remote pilots peering into video screens thousands of miles from the battlefield. But now, some scientists say, arms makers have crossed into troubling territory: They are developing weapons that rely on artificial intelligence, not human instruction, to decide what to target and whom to kill. As these weapons become smarter and nimbler, critics fear they will become increasingly difficult for humans to control, or to defend against. And while pinpoint accuracy could save civilian lives, critics also fear that weapons without human oversight could make war more likely, as easy as flipping a switch.

In a recent article co-authored with Daniel Reisner, we argue that, with proper international and national-level processes, emerging autonomous weapon systems can be effectively regulated within the existing law of armed conflict framework (Anderson, Reisner & Waxman, Adapting the Law of Armed Conflict to Autonomous Weapon Systems). The Times article lays out some of the dangers attendant to these systems, as well as their potential benefits, including with respect to preventing civilian collateral damage. We think those dangers and benefits are best balanced through the application and refinement of traditional law of armed conflict principles and standards. The Times article also discusses the difficulties of line-drawing in any effort to prohibit autonomous systems. Those difficulties, we think, cut strongly against the international treaty ban promoted by many NGOs, and in favor of an incremental approach that adapts as the technology evolves.