How the United States Can Set International Norms for Military Use of AI

Lauren Kahn
Sunday, January 21, 2024, 9:00 AM

The Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy is a crucial step for developing international standards.

Vice President Kamala Harris, UK Prime Minister Rishi Sunak, and other world leaders meet at the AI Safety Summit in Milton Keynes, United Kingdom, on Nov. 2, 2023. Photo credit: Simon Walker/No. 10 Downing Street via Flickr; CC BY-NC-ND 2.0 DEED.


Editor’s Note: AI is spreading to militaries around the world, but its governance is weak, creating the potential for accidents, inadvertent escalation, and other problems. Lauren Kahn, of Georgetown University’s Center for Security and Emerging Technology, argues that the U.S.-backed Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy is an important first step, but that it is vital to build on it to develop effective norms around the military uses of AI.

Daniel Byman

***

At the AI Safety Summit in the United Kingdom in November 2023, U.S. Vice President Kamala Harris highlighted a crucial milestone in the international governance of artificial intelligence (AI). Amid a flurry of recent U.S. initiatives wrestling with AI challenges, she announced that 31 countries had endorsed the Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy. The declaration, first unveiled at the Responsible AI in the Military Domain (REAIM) summit in February 2023, lays out a series of key principles for military applications of AI. Since the announcement, the number of endorsers has grown to 49, with the list of states spanning all five of the United Nations’ regional groups.

The political declaration has arrived at a critical time: AI has a growing presence on the battlefield and in war rooms. While AI development advances rapidly, understanding of these systems lags behind. Coupled with rising geopolitical tensions, ill-considered and poorly informed deployments of military AI could make accidents and miscalculations more likely—especially if nascent international AI governance efforts fail to extend beyond commercial uses or autonomous weapons.

The political declaration is a significant, collaborative step toward establishing international norms for the use of AI in military contexts. It provides both the necessary momentum and the opportunity for states to make real progress. However, additional initiatives and concerted international efforts will be required for the declaration to be effective.

Gaps in Global Military AI Governance Proposals

The ongoing Israel-Hamas and Russia-Ukraine conflicts vividly illustrate the increasingly active roles AI and autonomy are playing in warfare—from algorithms that optimize artillery fire to target-identifying computer vision models, which are said to work almost 50 times faster than human teams. The use of AI also extends beyond kinetic military action. AI is a general-purpose, enabling technology, much more like electricity or the combustion engine than a specific weapon or platform like a nuclear weapon, a Patriot missile, or an aircraft carrier. Its increasing prevalence in all aspects of warfighting—from planning, wargaming, and intelligence to targeting and maneuvering on the battlefield—highlights AI’s versatility and extensive applicability. Already, militaries around the world (especially the United States and China) are investing heavily in AI for intelligence, surveillance, and reconnaissance (ISR); logistics; cybersecurity; command and control; various semiautonomous and autonomous vehicles; and for generating efficiencies in day-to-day functions such as recruiting, payroll, and maintenance. Given AI’s general-purpose applicability and origins in the commercial sector, a broad range of AI capabilities will likely diffuse quickly to a wide array of state and non-state actors.

International deliberations on the military use of AI have focused largely on lethal autonomous weapons systems (LAWS), colloquially referred to, usually by their critics, as “killer robots.” For instance, states parties to the United Nations Convention on Certain Conventional Weapons (CCW) have been engaged in open-ended discussions on LAWS since 2014, including through a dedicated Group of Governmental Experts. The U.N. General Assembly also recently approved a resolution calling on member states to submit their views on LAWS.

The discussion of LAWS addresses only a fraction of AI’s military applications. The “Call to Action” announced alongside the political declaration at the REAIM summit adopts a broader perspective. While successful in garnering support and engagement, it primarily urges signatories to acknowledge AI’s growing military significance and the importance of safety and ethical standards without introducing substantial new measures or guidelines.

Other unilateral, bilateral, and multilateral AI governance efforts have focused on safe, ethical, and responsible uses of AI. These have often—correctly—created carve-outs and exclusions for military applications, acknowledging the different sets of rules, expectations, norms, and uses that apply. For example, the White House’s executive order on AI largely set aside national security applications, delegating them instead to a separate national security memorandum. Other multilateral endeavors, like the Bletchley Declaration that emerged from the AI Safety Summit held in the United Kingdom in November 2023, applied only to commercial applications of AI.

This approach has created a notable gap in international AI governance, as these carve-outs have not been adequately supplemented with corresponding, parallel initiatives aimed at the military domain. By failing to address military applications of AI beyond LAWS, policymakers and diplomats leave conflicts and international competition exposed to avoidable harm, accidents, and potential strategic instability. AI is often brittle: shortcomings and vulnerabilities such as bias, poisoning attacks, insufficient training data, or alignment difficulties can produce systems that operate flawlessly in test environments but falter or behave unexpectedly in real-world conditions, leading to critical mission failures. Adversaries could easily misconstrue such failures, and the opacity of many AI systems would make them even harder to interpret, heightening the risk of inadvertent escalation and unintentional conflict.

The Political Declaration’s Unique Focus

The political declaration represents the most comprehensive state-level multilateral confidence-building measure for military applications of AI. Confidence-building measures are a class of information-sharing and transparency-enhancing arrangements and tools. Popularized during the Cold War, they were employed to help great-power competitors manage uncertainty, ultimately aiming to reduce the risk of unintentional nuclear war. Confidence-building measures like the political declaration can help manage AI risks by clarifying intentions and, most critically, encouraging countries to ensure the systems they employ are reliable and less likely to malfunction.

Critically, the political declaration applies to all applications of AI and autonomy in military contexts, not simply “lethal” or weapons-based ones. The United States committed to 12 principles in the original declaration unveiled at the REAIM conference in the Netherlands in February 2023. These included ensuring that AI systems have auditable methodologies, requiring bias mitigation, and guaranteeing that AI systems are designed and engineered in alignment with their intended functions—and that those functions are explicit and well defined. The revised version consolidates these into 10 principles and adds new elements. The additions include material addressing the unique challenges that emerge when humans interact with AI, such as the tendency of humans to place excessive trust in, and defer too readily to, machines (known as automation bias). It also discusses countries’ use of military AI to “enhance their implementation of international humanitarian law and to improve the protection of civilians and civilian objects in armed conflict.” The revision also removes some elements, such as the provision relating to nuclear weapons decisions.

The political declaration stands out for two compelling reasons. First, it fills a significant gap in international AI governance, covering all military uses of AI—no carve-outs or exclusions. By encompassing the complete spectrum of AI use cases, it establishes a solid foundation for understanding and mitigating potential AI risks. It also expands the scope for collaboration with diverse stakeholders: though the declaration addresses only uses of AI and autonomy by militaries and is therefore limited to nation-states, it can facilitate engagement with the private sector, including companies that develop dual-use AI technologies that are not inherently lethal, as well as with the nongovernmental organization community.

Second, the breadth of participation in the declaration is crucial. A U.S. State Department official noted when the declaration was first announced that the United States was seeking other states to endorse the agreement: “We would like to expand that to go out to a much broader set of countries and begin getting international buy-in, not just a NATO buy-in, but in Asia, buy-in from countries in Latin America.” This inclusive approach recognizes that employing AI in military contexts is not relevant only to great-power competitors. Notably, the 49 states currently committed to the declaration include close U.S. allies as well as a wider range of stakeholders, including leading nations in AI development, like Singapore, and those looking to harness AI to supercharge digitalization efforts, like Malawi.

First Step Versus Only Step

The political declaration is a starting point, and a necessary one at that. But it cannot be the only step toward globally responsible military use of AI and autonomy.

Confidence-building measures like the political declaration are meant to be cumulative and complementary, developing norms over time—in this case, norms for AI use. Confidence-building measures worked during the Cold War because they mutually reinforced one another, addressing different aspects of the same problem: reducing the opacity, and the attendant risks, associated with introducing new technology. For example, President Eisenhower’s 1955 Open Skies proposal, the establishment of communication pathways like the Washington-Moscow hotline, the 1972 Incidents at Sea Agreement, and the advance-notification requirement for major troop movements and maneuvers in the 1975 Helsinki Final Act each focused on a separate, narrowly defined aspect of the same issue—reducing the possibility that actions might be misinterpreted in ways that lead to escalation.

Similarly, to have the most impact, the political declaration will need to become one of many initiatives laying out guidelines for the military use of AI that, taken together, converge on key elements and collectively shape responsible behavior in this domain. The most crucial aspect now is to sustain the momentum and progress toward implementing the declaration and determining a way to measure global progress on these principles.

Continued conversation among signatories is essential for clarity, transparency, and sharing lessons learned from domestic and international efforts to adhere to the principles. The United States has set a precedent in military AI with its commitment to transparency and leadership, as shown by its early adoption of Department of Defense Directive 3000.09 (Autonomy in Weapon Systems) and its emphasis on ethical principles in its AI and data strategy. However, its initial failure to proactively encourage other states to adopt comparable policies was a missed opportunity in its effort to “lead by example.” Since 2020, there has been a concerted push to increase the number of venues available to states to share best practices on military AI, mainly through the Defense Department’s AI Partnership for Defense, but that forum reaches only 16 participating countries. The political declaration allows more states to take proactive action and the United States to continue leading. The United States has acknowledged this, emphasizing that the declaration is merely the “beginning of a process,” with the first follow-up meeting of signatories already set for the first quarter of 2024.

In addition to direct follow-on meetings, more parallel unilateral, bilateral, and multilateral efforts should be set up to directly address how to implement the principles outlined in the declaration and how to manage specific AI use cases not explicitly covered. For instance, ongoing United Nations efforts on autonomous weapon systems support and reinforce the principles outlined in the declaration and demonstrate how they might be implemented in practice for specific classes of systems. Because the updated declaration omits a distinct commitment on AI in nuclear contexts, nuclear powers should pursue a separate, dedicated follow-on commitment. Given that the principle of maintaining human involvement and control in nuclear decision-making already appears in the U.S. Nuclear Posture Review and has been implicitly established in practice by the United States, France, and the United Kingdom, this could be an easy win. Other initiatives more directly related to elements of the political declaration include establishing Track II dialogues with technologists and experts focused on best practices for evaluating machine learning processes—some of which have already begun.

The character of warfare is changing. New, potentially transformative capabilities are leading states to change how they pursue their national interests and fight wars. Institutionalizing norms that promote responsible military use of AI and autonomy needs to be a national and international priority. The political declaration is a significant accomplishment, but it is only the beginning.


Lauren Kahn is a senior research analyst at Georgetown’s Center for Security and Emerging Technology (CSET) focused on national security applications of artificial intelligence. Prior to CSET, she was a research fellow at the Council on Foreign Relations, where she worked on defense innovation and the impact of emerging technologies on international security, with a particular emphasis on AI.
