
‘Slaughterbots’ and Other (Anticipated) Autonomous Weapons Problems

Nicholas Weaver
Tuesday, November 28, 2017, 9:00 AM


Published by The Lawfare Institute

The Future of Life Institute recently released “Slaughterbots,” a seven-minute video that looks like an episode of Black Mirror (a science-fiction anthology show focused on technology-induced nightmares). It depicts a near future in which a defense contractor develops small autonomous unmanned aerial vehicles (UAVs), each the size of a small toy and armed with an explosively formed penetrator (a small explosive charge that drives a piece of metal through its victim’s skull). In short, a “slaughterbot.” Each one identifies and kills targets based on preprogrammed criteria and aggregated data. The hypothesized UAVs in the video are capable, for example, of breaking a few windows, flying into the Senate chamber and killing senators on only one side of the aisle.

The Future of Life Institute’s intent is to highlight the anticipated dangers posed by autonomous weapons. But I fear that autonomous weapons are inevitable; this is not a hypothetical threat tomorrow but a real threat today. Simple estimation shows that these weapons would not be difficult to mass-produce. While the technology may not be widely available now, once it is, we will face a problem of unprecedented scale. Our ability to defend ourselves against a threat like the one illustrated in the video is limited. We need to plan for malicious swarms of autonomous weapons in civilian environments.

First, how easily could one build such a weapon? Some back-of-the-envelope design suggests that I could build the computer necessary to run a slaughterbot, including all the inertial sensors and communications plus two outrider cellphone cameras, in a package the size of a sugar cube. Give me a $10 million budget and I could produce slaughterbots with a manufacturing cost of roughly $200 per unit, provided I produce enough of them. After all, the base airframe and camera without the computer cost about $45.
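To see why “producing enough of them” matters, the estimate above can be sketched as a simple amortization: a one-time design-and-tooling budget spread over the production run, plus a fixed marginal cost per drone. The $10 million budget and the roughly $200 marginal cost are the article’s figures; the production volumes below are illustrative assumptions, not claims from the article.

```python
# Back-of-the-envelope amortization of a one-time budget over production volume.
# NRE and PER_UNIT come from the article; the volumes are illustrative assumptions.

NRE = 10_000_000   # one-time design/tooling budget (article's $10 million figure)
PER_UNIT = 200     # marginal manufacturing cost per drone (article's estimate)

for units in (1_000, 10_000, 100_000):
    all_in = (NRE + PER_UNIT * units) / units
    print(f"{units:>7} units: ${all_in:,.0f} each, all-in")
```

At 1,000 units the one-time budget dominates (over $10,000 per drone); at 100,000 units the all-in cost falls to roughly $300 each, which is the sense in which mass production makes the per-unit figure plausible.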

Further amplifying the concern, everything involved, from the silicon chips to the machines needed to crank out tens of thousands of these little nightmares, is available off the shelf. With only a little smuggling effort, even North Korea could build such things. And because such systems would be autonomous, there would be no communications link between an operator and the device to jam.

Slightly larger drones with two- to three-foot wingspans are even easier to produce. They can be cheaply manufactured with a consumer-grade 3-D printer, a vacuum-forming table and some components ordered from Amazon; instead of requiring a specialized factory, they can be produced in a garage. At that point, once somebody designs a killer UAV, almost anyone can manufacture copies. Drug cartels have already started producing armed UAVs, and the Islamic State is adapting off-the-shelf drones into bombers.

Autonomous UAV-swarm operation is no longer confined to the research lab; it is now the subject of student competitions among the U.S. military academies, where students develop software to turn a fleet of individual fixed-wing aircraft and quadcopters into a unified swarm tasked with defeating another swarm in air-to-air combat. Not only is the hardware growing cheaper and more common, the software knowledge needed is no longer exotic; it is within the reach of advanced engineering students.

Defeating swarms of UAVs in a military environment is a Defense Advanced Research Projects Agency (DARPA)-level problem. The current DARPA program focuses on developing systems that can defend a military convoy from attacking swarms. Although the program intends the results to be low cost and low collateral damage, a military environment may tolerate an uncomfortably loose definition of “low.”

But there is no equivalent research and development aimed at the even harder problem we will face in civilian environments, where we cannot simply open fire with lasers or bullets and where a million-dollar system is considered outrageously expensive. Instead, we need systems that are both inexpensive and as safe as possible, so that when an errant round hits someone, it annoys or injures rather than maims or kills. And we cannot deploy systems classified as “secret” across cities or in the back of police cars.

This is a hard research problem: Governments need to invest in research and development now so that, when the need arises, we can crank out anti-drone systems quickly. The effort will require a lot of out-of-the-box thinking. A collaboration between the National Science Foundation (NSF) and the Department of Homeland Security (DHS) might fit the bill: a large number of small grants (something the NSF excels at), combined with DHS’s focus on implementation, would help ensure that promising technologies can be made ready for production.

Because I fear the slaughterbots are coming.

Nicholas Weaver is a senior staff researcher focusing on computer security at the International Computer Science Institute in Berkeley, California, and Chief Mad Scientist/CEO/Janitor of Skerry Technologies, a developer of low-cost autonomous drones. All opinions are his own.
