Too Much Too Soon: China, the U.S., and Autonomy in Nuclear Command and Control

Ashley Deeks
Monday, December 4, 2023, 3:18 PM

China won’t yet commit to keep autonomy out of its nuclear command and control. It will take a lot more talking to get there.

Lieutenant Kevin R. McCluney monitors a Minuteman III ICBM launch control capsule in Colorado. (Historic American Engineering Record, https://tinyurl.com/ye7mmb2n; Public Domain, https://creativecommons.org/public-domain/)

News reports leading up to the meeting between President Joe Biden and Chinese President Xi Jinping in November indicated that the United States sought to reach an agreement to keep autonomy out of nuclear command-and-control systems (C2)—or at least set up a formal dialogue on the issue. A South China Morning Post headline, citing unidentified sources, proclaimed: “Biden, Xi set to pledge ban on AI in autonomous weapons like drones, nuclear warhead control: sources.” Before the meeting, an Indo-Pacific expert at the German Marshall Fund told the press that China had signaled interest in discussing norms and rules for artificial intelligence (AI), something Biden’s team surely knew as well. The administration seemingly sought to capitalize on that interest by seeking a meeting of the minds on the narrow but important topic of nuclear C2. 

But the U.S. aspirations, while laudable, proved to be too ambitious. As the New York Times reported, “On one of the critical issues, barring the use of artificial intelligence in the command and control systems of their nuclear arsenals, no formal set of discussions was established. Instead, Mr. Biden’s aides said that Jake Sullivan, the national security adviser, would keep talking with Wang Yi, China’s chief foreign affairs official.” 

It is not surprising that the Biden administration was unable to make more progress with China. First, the United States and China do not seem to have had many direct bilateral conversations about military AI to date. Reaching agreement about nuclear AI—even in a statement that would be nonbinding—was an ambitious goal.

Second, the United States already committed unilaterally to maintaining human control over nuclear decisions, so the U.S. government would not have had to “give up” anything to reach a bilateral commitment about nuclear C2. In early November, for instance, Deputy Secretary of Defense Kathleen Hicks affirmed, “[I]n the 2022 Nuclear Posture Review, the United States made clear that in all cases, we will maintain a human in the loop for all actions critical to informing and executing decisions by the President to initiate or terminate nuclear weapons.” But the fact that the United States had already agreed to this norm might have—ironically—made it less attractive to the People’s Republic of China (PRC). Or there might have been an element of “reactive devaluation” in play. This cognitive bias reflects “the fact that the very offer of a particular proposal or concession—especially if the offer comes from an adversary—may diminish its apparent value or attractiveness in the eyes of the recipient.” That is, even if the PRC generally thinks that keeping autonomy out of nuclear C2 is a good idea, the lack of trust between the two states, the fact that the United States would have conceded nothing new by adhering to this bilateral commitment, and the fact that it was the United States proposing the idea all might have contributed to the negative (or at least very cautious) Chinese response.

The PRC, it turns out, is not the only state that is unprepared to commit to excluding autonomy from its nuclear C2 systems. In February, the United States put forward a political declaration on military AI that noted, among other things, that states should maintain a human in the loop in nuclear C2. The United States removed this provision from the November version of the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy (which itself is not binding). Speaking at the United Nations on Nov. 13, Under Secretary of State for Arms Control and International Security Bonnie Jenkins explained,

The February version included a statement based on a commitment the United States made together with France and the United Kingdom to ‘maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.’ In our consultations with others, it became clear that, while many welcomed this statement of assurance, the nexus between AI and nuclear weapons is an area that requires much more discussion. We removed this statement so that it did not become a stumbling block for States to endorse.

It is not clear which state or states other than China balked at this provision, or precisely why they did so, but the removal demonstrates that China is not the only state cautious about this idea.

The U.S. government is to be commended for fleshing out in greater detail some norms that states should pursue for their military AI systems, including through its political declaration and its efforts to open a military AI dialogue with China. But these recent developments further illustrate how difficult it will be to obtain legally binding international agreements—even very narrow ones—among states that are actively pursuing military AI. As I wrote in an earlier Lawfare paper, “[R]egulation of national security AI is more likely to follow the path of hostile cyber operations, which have seen limited large-scale international agreement about new rules. Absent a precipitating crisis, small-group cooperation and unilateral efforts to develop settled expectations around the use of national security AI are far more likely.” It will be a good sign if future U.S.-China dialogues about AI—even informal and low-profile ones—proceed, as these meetings will give the United States more chances to explain to China how the Defense Department is trying to establish strong, high-level oversight over uses of military AI. But the bilateral trust between the United States and China is so low and the verification problems are so hard that it may take a while before the two states reach a shared view about keeping autonomy out of nuclear C2.


Ashley Deeks is the Class of 1948 Professor of Scholarly Research in Law at the University of Virginia Law School and a Faculty Senior Fellow at the Miller Center. She serves on the State Department’s Advisory Committee on International Law. In 2021-22 she worked as the Deputy Legal Advisor at the National Security Council. She graduated from the University of Chicago Law School and clerked on the Third Circuit.
