Accountability for Algorithmic Autonomy in War

Gabriella Blum, Dustin Lewis, Naz Modirzadeh
Monday, September 12, 2016, 10:05 AM

The Defense Science Board recently identified five “stretch problems”—goals that are “hard-but-not-too-hard” and that are intended to accelerate the process of bringing a new autonomous capability into widespread application:

  • Generating “future loop options” (that is, “using interpretation of massive data including social media and rapidly generated strategic options”);
  • Enabling autonomous swarms (that is, “deny[ing] the enemy’s ability to disrupt through quantity by launching overwhelming numbers of low‐cost assets that cooperate to defeat the threat”);
  • Intrusion detection on the Internet of Things (that is, “defeat[ing] adversary intrusions in the vast network of commercial sensors and devices by autonomously discovering subtle indicators of compromise hidden within a flood of ordinary traffic”);
  • Building autonomous cyber-resilient military vehicle systems (that is, “trust[ing] that … platforms are resilient to cyber‐attack through autonomous system integrity validation and recovery”); and
  • Planning autonomous air operations (that is, “operat[ing] inside adversary timelines by continuously planning and replanning tactical operations using autonomous ISR [intelligence, surveillance, and reconnaissance] analysis, interpretation, option generation, and resource allocation”).

Even if none of these goals is achieved, the Defense Science Board’s report illustrates how the United States has already benefited in recent years from commercial and military developments in autonomous technologies in terms of “battlespace awareness,” protection, “force application,” and logistics. And, to be sure, the United States is far from alone in pursuing a qualitative edge through such technologies.

How should policymakers, technologists, armed forces, lawyers, and others conceptualize accountability for technical autonomy in relation to war? In a recently published briefing report from the Harvard Law School Program on International Law and Armed Conflict, we devise a new concept: war algorithms. We define a war algorithm as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict.

Why focus on war algorithms? The background idea is that authority and power are increasingly expressed algorithmically—in war as in so many other areas of modern life.

This is not a wholly new phenomenon, of course. For decades, for instance, military systems have used algorithms to help intercept inbound missiles. But the underlying concern is becoming more pressing, due in no small part to recent advances in artificial intelligence (AI), which mean, among other things, that learning algorithms and architectures are becoming more capable of human-level performance in previously intractable AI domains.

The underlying algorithms are developed by programmers and are expressed in computer code. Yet some of these algorithms seem to challenge key concepts—including attribution, control, foreseeability, and reconstructability—that underpin legal frameworks regulating war and other related accountability regimes.

As we see it, the current crux is whether certain advances in technology are susceptible to regulation and, if so, whether and how they should be regulated. In particular, we are concerned with technologies that are capable of “self-learning” and of operating in relation to war, and whose “choices” may be difficult for humans to anticipate or unpack or whose “decisions” are seen as “replacing” human judgment.

To date, much of the discourse has revolved around an as-yet-undefined concept: “autonomous weapon systems” (AWS). As we document, more than two dozen states have endorsed the notion of “meaningful human control” over AWS. Yet, for an array of reasons elaborated in the report, the AWS framing has so far not yielded meaningful regulation.

Alongside offering “war algorithms” as a new organizing theme, we build on the rapidly growing body of scholarship and policy analysis on AWS. To broaden our approach and to seek critical feedback, earlier this year one of us conducted an academic exchange in China on our initial analysis.

In sum, we sought to provide not only a new conceptual frame but also a resource for policymakers, technologists, lawyers, advocates, and others concerned with accountability:

  • We highlight developments in AI;
  • We profile over three dozen weapons, weapon systems, and weapon platforms that certain commentators have characterized as autonomous weapons;
  • We describe how a handful of states—including France, the Netherlands, Switzerland, the United Kingdom, and the United States—have considered or formally adopted some of the most elaborate definitions relevant to AWS;
  • We consider how war algorithms might implicate an assortment of fields of international law—not only international humanitarian law (also known as the law of armed conflict) and international criminal law but also such fields as arms-transfer law, space law, and international human rights law;
  • We sketch a three-part accountability approach that, while non-exhaustive, aims to give a thumbnail view of existing and possible regulatory routes;
  • We provide a bibliography with over 400 analytical resources on technical autonomy in war; and
  • We excerpt and catalog dozens of states’ positions on autonomous weapons.

We focus largely on international law because it is the only normative regime that purports—in key respects but with important caveats—to be both universal and uniform. The accountability avenues we focus on therefore include state responsibility for internationally wrongful acts and individual responsibility for international crimes, two vitally important regimes that should not be discounted. We also illustrate how states have so far addressed AWS largely through the framework of the Convention on Certain Conventional Weapons (CCW). We note, however, that it is currently unclear whether states will provide sufficient political backing for a CCW process on AWS and that, in any event, the AWS framing is limited to weapons, thus excluding other algorithmically derived functions related to war (however important and widespread those might increasingly be).

We also highlight less formal and unconventional options—including at the domestic and transnational levels—to consider in order to effectively hold someone or some entity answerable for the design, development, or use of a war algorithm. Think of fostering normative design of technical architectures, establishing codes of conduct, and promoting self-regulation among technologist communities.

What’s the upshot of our research? More war algorithms are on the horizon—not only in the United States but also in many technologically sophisticated states around the world. That should prompt all of us to consider an array of accountability approaches—traditional and unconventional, local and multilateral alike.


Gabriella Blum is the Rita E. Hauser Professor of Human Rights and Humanitarian Law at Harvard Law School.
Dustin A. Lewis is the Research Director for Harvard Law School’s Program on International Law and Armed Conflict. He is also an Associate Senior Researcher in the Armament and Disarmament Cluster of the Stockholm International Peace Research Institute.
Naz K. Modirzadeh is a Professor of Practice at Harvard Law School and the founding Director of the HLS Program on International Law and Armed Conflict. She writes and teaches primarily in the field of public international law, with a focus on non-use of force, armed conflict, the U.N. Security Council, and counterterrorism issues. Modirzadeh is on the Board of Trustees of the International Crisis Group and is a non-resident Senior Fellow at the Lieber Institute for Law and Warfare at the U.S. Military Academy at West Point.
