
Will Killer Robots Be Banned? Lessons from Past Civil Society Campaigns

Michael C. Horowitz, Julia M. Macdonald
Sunday, November 5, 2017, 10:00 AM



Published by The Lawfare Institute
in Cooperation With
Brookings

Editor’s Note: One of the most successful NGO anti-war efforts was the campaign to ban landmines, which led to a treaty banning their use and production in 1997. Activists, not surprisingly, are using this model as they focus on other technologies, including autonomous weapons systems—and yes, here is a Terminator link. Mike Horowitz of Penn and Julia Macdonald of the University of Denver assess the reasons the anti-landmine campaign succeeded and argue that many of the factors are different when it comes to autonomous weapons.

***

After several years of debate, the Convention on Certain Conventional Weapons (CCW) will convene a Group of Governmental Experts (GGE) on November 13-17, 2017, to discuss lethal autonomous weapon systems (LAWS), more popularly called “killer robots.” While the precise nature of these weapon systems is still debated, they are generally understood to be weapons that can select and engage targets on their own. Academics, policymakers, and technology leaders have raised questions about the risks of reducing human control over the use of force by deploying such systems.

The Campaign to Stop Killer Robots is an umbrella organization that includes many non-governmental organizations (NGOs) with shared concerns about LAWS. Collectively, the Campaign advocates for a preemptive ban, claiming that the introduction of such weapons would violate international humanitarian law and risk devastating consequences for civilian populations. These groups are not alone in their concerns about LAWS. Tesla founder Elon Musk argues that artificial intelligence could cause World War III. A 2017 open letter from the Future of Life Institute, signed by dozens of researchers and executives who work on autonomous systems and robotics, states that making progress on the LAWS issue at the GGE meeting is vital for humanity.

An interesting facet of the Campaign to Stop Killer Robots is the way it seeks, in ways large and small, to emulate the success of an NGO movement from the late 20th century: the International Campaign to Ban Landmines (ICBL). In 1997, the ICBL successfully lobbied for the negotiation of an international ban on landmines, informally known as the Ottawa Convention. The resemblances between the killer robots campaign and the ICBL, as well as the movement that resulted in the Convention on Cluster Munitions, raise interesting questions about NGO strategy and what may emerge from the GGE process.


However, the similarities between the landmines campaign and the Campaign to Stop Killer Robots mask critical differences that will make a substantive agreement between states on LAWS hard to achieve. In particular, the lack of consensus around what constitutes a lethal autonomous weapon, uncertainty surrounding these weapons’ military effectiveness, and the absence (so far) of human casualties from the use of LAWS create significant barriers to an international ban.

The International Campaign to Ban Landmines (ICBL)

Formal efforts to regulate or ban the use of landmines, driven by mounting civilian casualties, began at the first review conference of the CCW’s Protocol II in 1995. After the failure to reach agreement through the CCW process, the ICBL began advocating for a new international agreement outside the CCW framework.

The ICBL rallied global civil society around the cause—a coalition of NGOs striving to achieve the shared goal of an international prohibition on the export and use of landmines. Together, NGOs from around the world raised awareness of the disastrous consequences of these weapons and urged governments to resume negotiations under the auspices of the United Nations. The ICBL campaign ultimately led to the conclusion in December 1997 of the Ottawa Treaty, which prohibited the use, stockpiling, production, and transfer of anti-personnel mines. To date, there are 162 state parties to the treaty, and global stockpiles of landmines are thought to be around 50 million, down from 100 million in the 1990s. The ICBL received the Nobel Peace Prize in 1997 in recognition of its success.

Given the speed with which the treaty came into being, it is worth asking how the ICBL managed to achieve its goal. Research by Charli Carpenter, Richard Price, and others explains the success of this campaign as driven by several factors.

Leadership and Coordination: The ICBL was launched in 1992 by six NGOs with a shared interest in banning landmines. By 1996, under the leadership of Jody Williams, the campaign had grown to become a network of 600 NGOs from over 40 countries. The breadth of membership combined with strong leadership ensured that the movement maintained momentum and could connect the activities of thousands of grassroots activists to an international agenda.

Publicity and Media: Publicity generated by the NGOs’ activities ensured landmines remained a high profile issue, placing pressure on nation-states. International NGOs like Human Rights Watch, supported by the endorsement of the International Committee of the Red Cross (ICRC), produced numerous reports documenting shocking statistics on the number of civilian casualties per week from landmines, the numbers of estimated mines deployed across the globe, and how long they would take to remove. These reports were not only picked up by major media outlets, including the New York Times, Washington Post, and The Economist, among others, but they also served to attract the attention of influential leaders. U.S. President Clinton joined the call for the “eventual elimination” of landmines in 1994, and the UK’s Princess Diana became an ambassador for the ICBL movement when she visited partially cleared sites in Angola and Bosnia. These figures played a key role in raising awareness of the magnitude of the issue.

Legal Framing: The leaders of the ICBL understood that, to be successful, they needed to situate the argument for banning mines within the context of other weapons taboos, and to make their case on the basis of accepted international humanitarian law. In particular, the law of war requires combatants to distinguish between civilian and military targets and to ensure that the extent of any civilian casualties be in proportion with military objectives. ICBL proponents claimed that landmines violated both of these requirements as they do not discriminate between soldier and civilian, and they inflict superfluous harm on civilians by causing either death or permanent injury.

U.N. Support: Finally, the United Nations provided a crucial forum through which the ICBL leadership could meet with and influence governments from around the world. The 1995 CCW Review Conference and preparatory meetings were held under the auspices of the United Nations, with NGO representatives able to participate in a number of ways. The International Committee of the Red Cross, which was also raising awareness about the harms from landmines, attended as an expert observer, submitted proposals, and distributed reports. The ICBL addressed the plenary meeting of the CCW and played a key role in lobbying government delegations. The United Nations therefore provided a unique environment in which civil society could directly engage with country leaders.

Similarities with the Campaign to Stop Killer Robots

Given that the Campaign to Stop Killer Robots is modeled, in part, on the ICBL and the subsequent campaign against cluster munitions (a sister campaign of the ICBL that led to the successful conclusion of the Convention on Cluster Munitions in 2008), it is unsurprising that there are similarities between the campaigns.

The Campaign to Stop Killer Robots was formed in 2012 by ten NGOs, several of which were part of the ICBL. Under the leadership and coordination of former ICBL leaders such as Mary Wareham of Human Rights Watch and Jody Williams, the coalition has already grown to comprise 64 international, regional and national NGOs spanning some 28 countries. Again, the breadth of membership and its networked approach helps the campaign connect grassroots activists with the international stage.

Like the ICBL, the Campaign to Stop Killer Robots has courted media attention by using its coalition of international NGOs to highlight the dangers of lethal autonomous weapons. Human Rights Watch has issued a number of prominent studies on the threat posed by “killer robots,” as has Amnesty International. These reports have received coverage in major media outlets, such as the New York Times. In 2015, thousands of people, led by Tesla’s Elon Musk, signed an open letter to the United Nations urging a ban on weapons that “lack meaningful human control.” In 2017, as referenced above, a subsequent letter, while not explicitly advocating for a ban, highlighted the dangers of lethal autonomous weapon systems. These two documents have increased awareness of the campaign’s ongoing efforts, with prominent endorsements by Musk and physicist Stephen Hawking bringing the arguments of the Campaign to the attention of the larger public.


Reminiscent of the ICBL’s approach, the leadership of the killer robots campaign has framed its legal arguments within the existing law of war, emphasizing the principles of distinction and proportionality. Proponents of a ban argue, for example, that autonomous systems would lack the human judgment needed to discriminate between soldier and civilian, or to evaluate the proportionality of an attack.

Finally, the United Nations has once again provided a key forum for debate. The first multilateral meeting on lethal autonomous systems was held at a CCW meeting in Geneva in 2014. Since that time, there have been annual meetings on the status and future of autonomous weapons at the CCW, culminating in the decision to formalize this deliberation process with a discussion mandate for a Group of Governmental Experts that will meet in November 2017. These meetings provide an environment in which campaigners can engage directly with government officials, as well as coordinate with each other.

Should We Expect the Same Outcome?

One might argue that, due to the similarities in the origins and conduct of these two movements, the Campaign to Stop Killer Robots will also meet its goal of achieving an international ban on lethal autonomous weapon systems. However, despite the many resemblances between the ICBL and the Campaign to Stop Killer Robots, there are differences that will make a ban on LAWS harder to achieve.

First, while it was clear what the ICBL was seeking to prohibit, there is much less definitional clarity around what constitutes a lethal autonomous weapon. As one of the authors of this article, Michael Horowitz, has noted, a key “challenge in the attempt to understand the ethical, legal, moral, strategic and other issues associated with autonomous weapon systems is a basic lack of agreement over what an autonomous weapon actually is.”

Horowitz, Paul Scharre, Rebecca Crootof, and others have pointed out that the definitional issues surrounding LAWS are significant. For example, some computer-guided precision weapons arguably select and engage targets after being launched by a person. Would these weapons be included in a ban? What about homing munitions? The British government, by contrast, defines autonomous weapon systems as only those systems with advanced cognitive abilities. These are very different ways of thinking about what falls into the category of a lethal autonomous weapon system.

Some NGO-led discussions have focused on the need for “meaningful human control” in the use of force. But that simply shifts the debate from directly defining a lethal autonomous weapon to how to define terms such as “meaningful” in this specific context, leaving much room for disagreement. While in the case of the landmines ban, states agreed to a ban first and a definition later, in the case of LAWS, a definition would almost certainly need to come first given the breadth of the potential category.

Second, while there was clear evidence of the human costs of the use of landmines around the world, the lack of casualties from the use of lethal autonomous weapon systems muddies the ability of the movement to build public support. The pictures of ordinary people injured and maimed by mines, combined with the casualty statistics, played a key role in shocking and shaming governments to take action. It also bolstered the legal argument that mines violated the proportionality and distinction principles of international humanitarian law. Persuading the international community to ban a technology preemptively without observing these human costs will be difficult. Moreover, autonomous weapon systems are a much broader category than blinding lasers, the only previous technology to be subject to a preemptive ban.


Not only is the international community less emotionally affected, but there remains uncertainty as to whether these weapons would, in fact, inherently transgress international law. It is also possible that autonomous weapon systems might reduce civilian casualties in some cases if they have high levels of accuracy, lack human emotions (e.g. revenge), and do not suffer from the same physical limitations as humans, such as fatigue.

Third, and potentially most important, the move to prohibit landmines faced relatively weak opposition in part because these weapons were not seen as necessary for the military effectiveness of armed forces around the world, and especially the armed forces of major powers. While once useful for protecting borders and slowing troops, newer technologies had surpassed mines in their ability to achieve these same goals by the 1990s (with the possible exception, as the United States argued, of the Korean Peninsula).

By contrast, uncertainties about the potential military effectiveness of LAWS differentiate the LAWS case from the landmines case, because states will be more wary of banning weapons that could be useful in the future. In theory, autonomous systems could help militaries defend against aircraft and missile swarms, their ability to operate at machine speed could provide operational advantages in naval and air warfare, and they could prove useful to states in situations when communication and radio links falter or break down. Given the potential advantages that these weapons may offer, and the disadvantages a state could face if its opponent had LAWS and it did not, agreement on a preemptive ban will be more difficult. Investments by China and Russia in autonomous systems further complicate the discussion, since they would be unlikely to follow a convention regulating LAWS if they believed those weapons provided necessary military capabilities. That could influence how other states make their own policies regarding LAWS.

Therefore, while the Campaign to Stop Killer Robots appears to be following a playbook similar to the ICBL’s, we should be cautious about drawing too many conclusions regarding the likelihood of a ban on LAWS. The differences between the two issue areas, in particular the lack of consensus around the definition of lethal autonomous weapons, uncertainty as to their military effectiveness, and the current lack of human casualties from the use of these weapons, will make a preemptive prohibition on their use harder to achieve. That said, continued dialogue about what LAWS are is essential to reaching agreement on the proper role of humans in decisions about the use of force, and on how best to achieve that aim.


Michael C. Horowitz is the director of Perry World House and professor of political science at the University of Pennsylvania.
Julia M. Macdonald is assistant professor of international relations at the University of Denver’s Korbel School of International Studies.
