
The Next Step in Military AI Multilateralism

Michael Depp
Tuesday, March 26, 2024, 10:00 AM
Updates to the Political Declaration are good, but the U.S. should seek to engage more countries and double its efforts on human control of nuclear weapons.
U.S. Marine with Black Sea Rotational Force 17.1 launches an unmanned aerial vehicle during exercise, 2017 (U.S. Marine Corps photo by Cpl. Sean J. Berry; Public Domain)

Published by The Lawfare Institute

As part of the deluge of new artificial intelligence (AI) policy documents surrounding the AI Safety Summit in November 2023, the United States released a long-awaited update to its Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. The most momentous part of the update was the addition of new signatories that joined the United States in agreeing to the principles, including the U.K., Japan, Australia, Singapore, Libya, and the Dominican Republic. This document is not a binding treaty, nor is it a detailed framework for international regulation of military AI. It is, however, a blueprint for a growing consensus on military use of AI, one that will herald a safer and more stable use of this technology in international politics. To further this mission, the United States needs to continue to add more signatories to the declaration and push for consensus on the nuclear norms that were removed from the original version.


The political declaration is the United States’s attempt to set the tone for the debate on military AI and autonomy. It represents a positive American vision of international AI regulation in the face of numerous calls for an international regime. It also comes at a moment of renewed interest in banning lethal autonomous weapons in the UN General Assembly and the resumption of the Group of Governmental Experts, which will discuss autonomous weapons in more detail. The United States has historically resisted full bans in favor of a softer approach of “responsible use,” which the document attempts to concretely outline in advance of these upcoming conversations.

A first draft of the declaration, released in February 2023 to much fanfare, essentially restated the principles from other U.S. policy documents as protonorms. The document itself was a good blueprint for how to responsibly use AI and autonomy, but its main failing was that the United States was largely speaking alone. Not a single other nation signed on to it publicly. For the United States to be a global AI leader, someone had to be in its camp, and, at least publicly, no one was when the document was released.

This new version of the declaration seeks to rectify that, and it has been broadly successful, with more than 50 countries signing on. The signatories span a wide geographic mix: While more than half are European, there are signatories from Africa, Asia, Latin America, and Oceania. They also come from beyond the treaty allies of the United States, representing real outreach.


As a natural part of getting so many other countries to agree to specific language, the document contains numerous text changes. The vast majority of these changes resulted from haggling over individual words. For example, instead of taking “deliberate steps” to reduce bias, the countries agree to take “proactive steps,” indicating the importance of fixing bias before it manifests. What’s more, the new document creates a sturdier commitment to accountability in weapon design by requiring that the “methodologies, data sources, design procedures, and documentation” used to develop AI systems be both transparent and auditable to defense personnel, while the previous iteration required only auditability. The document also adds an entirely new mention of automation bias as a challenge to the context-informed judgments of the user, indicating a stronger focus on effective human-machine teams. And it completely abandoned a largely unnecessary commitment from the states to continue discussing how to implement these practices and evangelize them. This makes sense: Any state committed enough to sign the document will presumably want to carry out the promises it has already made and is likely to push others to follow suit.

All of these changes were either a step in the right direction or exceedingly minor in their focus. But something far more important was also lost in the negotiation process: mention of AI integration with nuclear command and control. The original draft claimed that “[s]tates should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment,” in language similar to the U.S. 2022 Nuclear Posture Review. This statement, or any mention of nuclear command and control, is missing due to the reticence of many of the signatories to legitimize, in their view, nuclear weapons by agreeing to constrain them.

Next Steps for the Political Declaration

This new version of the political declaration is a good move toward more global norms of responsible AI, but as Lauren Kahn notes, this cannot be the only step on this path. It is also only one line of effort among many: Work continues in the United Nations, within other multilateral initiatives like the U.K. AI Safety Summit, and in regional fora such as the European Union and the Association of Southeast Asian Nations. All of these can and should work coterminously, but there is still more that can be done with the political declaration.

The most important step is to continue to broaden the range of countries that agree to the document. It is critical that the United States works with like-minded countries to build these international norms. However, talking only to those who already agree does little to change behavior; it simply writes down an existing norm. For these norms to actively constrain the actions of governments, those sitting on the fence (and who would not otherwise follow them) need to be engaged in the process, such as those in Latin America, Africa, and Southeast Asia. To a large degree, the United States is already attempting to do this. When the new version of the political declaration was announced, it had only 31 original signatories. Since then, U.S. diplomacy has almost doubled that number, but there is still a long way to go.

In addition to global engagement writ large, the United States has some clear constituencies that must be specifically targeted with this new version of the political declaration. Ideally, as many states as possible should sign the document, but some states are more important than others. To accomplish this goal, the United States can prioritize countries where it hopes to base autonomous systems in the future, such as the Philippines. A lack of agreement on military AI use may pose future issues for military interoperability if countries object to integrating their forces with autonomous systems they view as dangerous or unsafe. The majority of the United States’s closest military allies in Europe and Asia signed, but Norway, an important NATO ally, has yet to do so. Countries like Norway and the Philippines should be the top priority for this additional engagement.

The Orphaned Nuclear Provision

The United States also needs to refocus on the nuclear weapons provision, even if it has to consider a different place to pursue it. As described above, the nuclear omission is understandable given the hesitancy that many nonnuclear states showed. It will also not significantly affect U.S. policy given the nation’s commitment to human control in the 2022 Nuclear Posture Review, but a nuclear agreement is still sorely needed. Integration of autonomy into nuclear command and control is among the most consequential uses of AI, and the provision’s inclusion would have helped bound the ability of the signatories to take unnecessarily dangerous actions.

The United States should seek better venues for this provision, given its importance. The principal focus of this effort should be the five permanent members of the UN Security Council, or P5 (China, France, Russia, the United Kingdom, and the United States), which have an outsized influence in the global governance of nuclear weapons. The best place to do this would be the regular P5 Conference Process, which brings together these states to discuss nuclear disarmament and control issues. The effectiveness of this forum waxes and wanes with the broader geopolitics of these countries, but its mission and the regular engagement from the most important nuclear powers makes it an ideal place to push new norms for nuclear weapons. 

There is likely to be some pushback from other states—in particular Russia, which champions nuclear systems that blur the line of human control—but there are other opportunities to explore as well. First, the United States could expand its joint statement with France and the U.K., which notes, “Consistent with long-standing policy, we will maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.” If China and Russia are unwilling to join in a similar call, France, the U.K., and the U.S. can evangelize this document to the wider Treaty on the Non-Proliferation of Nuclear Weapons signatories to build support and pressure Russia and China to agree. 

The P5 are not the only nuclear armed states in the world, and the other countries that have nuclear weapons should also agree to this. India, Pakistan, and Israel also have nuclear weapons and have neither made statements about human control of nuclear weapons nor signed the political declaration. Working with these three countries either to sign on to the joint statement made by France, the U.K., and the U.S. or, if that proves politically fraught for them, to issue a national policy that similarly requires human control for their own nuclear enterprise would help make the world a safer place. Regardless of what the arrangement looks like, the most important goal of any global regulation of nuclear command and control is to get as many nuclear armed states as possible to meaningfully commit to human control of nuclear weapons.


This new iteration of the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy has solved the most glaring issue of the original draft. Instead of the United States being a lone voice in a cacophony, it is now part of a growing group speaking in one voice. This group knows the importance of responsible use of AI and autonomy in military affairs, is committed to ensuring that deployed systems are safety-first and cognizant of bias, and is focused on thinking through use cases before development. Unfortunately, to achieve this amount of agreement, important provisions regarding nuclear safety had to be removed. Finding a suitable consensus on nuclear weapon controls is a priority, as is broadening the signatories to include the most critical military allies of the United States. This new political declaration builds on the success of the first version, but there is still important work to be done to foster international consensus, and it should not wait.

Michael Depp is a research associate at the Center for a New American Security, where he focuses on AI safety and stability.
