
Preparing National Security Officials for the Challenges of AI

Steve Bunnell
Tuesday, June 21, 2022, 8:01 AM

A review of James E. Baker, “The Centaur’s Dilemma: National Security Law for the Coming AI Revolution” (Brookings Institution, 2020).

Machine learning and artificial intelligence (Mike Mackenzie, CC BY 2.0)

Published by The Lawfare Institute

Artificial intelligence (AI) is one of several rapidly emerging technologies that promise to disrupt not only multiple sectors of the U.S. economy but also the manner in which the U.S. government carries out its foundational responsibility to protect national security consistent with the rule of law and constitutional values. This presents an important challenge. Hard legal and ethical questions about national security uses of AI are already myriad and constantly evolving and expanding. How should the U.S. integrate tools like neural language models and facial and image recognition into its intelligence collection and analysis efforts? How much faith should be placed in machine predictions and identifications that no human can fully understand? What sort of oversight is needed to control for bias and protect privacy? To promote public trust? How can the U.S. combat deepfakes by foreign adversaries without running afoul of the First Amendment and free speech values? What level of AI-based predication is sufficient to warrant what types of intelligence, investigative, or military actions? When a decision is made to launch a drone attack against a terrorist target based on AI-based data and image analyses, are humans in the loop, on the loop, or out of the loop?

James E. Baker’s “The Centaur’s Dilemma” is an excellent place to start for any national security policy official or lawyer looking to understand not only what AI can do in a security context but also the current legal and ethical frameworks (or lack thereof) that guide its use in the fast moving world of national security threats and military operations. “The Centaur’s Dilemma” is a thoughtful and crisply written exploration of the implications of AI and the legal, ethical, and normative frameworks that govern and channel the use of AI in the national security realm. 

The topic is of fundamental importance to global security in the 21st century. As the war in Ukraine is demonstrating, AI can play a critical role in both kinetic and nonkinetic domains. Senior U.S. defense officials have confirmed that the U.S. has been sharing AI-produced insights from vast amounts of data and battlefield intelligence with Ukraine, including footage from drones equipped with advanced object recognition and tracking capabilities. And drones are being used not just by military actors, as they were in past conflicts. Large numbers of small, cheap, and commercially available drones launched by civilians and journalists are having an unprecedented impact. They have documented troop and weapons movements, live combat, military casualties, and details of atrocities in real time, with immediate consequences on the battlefield—as well as a record of evidence for future war crimes tribunals. 

AI is also being used extensively in the information war. For example, Ukrainian officials, working with citizen volunteers, are reportedly using facial recognition software and social media data to identify the bodies of Russian soldiers killed in Ukraine, notify their families, and provide real-time information about the tragic costs of the war in an effort to counter Russian government censorship and internal propaganda. 

The role of AI in the cyber domain is less public. But it is safe to assume that AI-powered cyberattacks and countermeasures—such as malware that mutates to try to avoid detection by antivirus software, or the automated creation of highly personalized (and, hence, hard to detect) spear phishing attacks—are critical factors not just in the jockeying for advantage on the battlefield but also as a means to degrade or protect critical infrastructure and, more generally, to create (or defend against) economic and political pressure, confusion, and chaos. The Russians are certainly well aware of the implications of AI. As Baker notes in the book, Vladimir Putin declared in 2017 that “whoever controls [AI] will be the ruler of the world.” Let’s hope that he doesn’t control it before the U.S. does. 

Baker, the director of the Syracuse University Institute for Security Policy and Law, is a former chief judge of the U.S. Court of Appeals for the Armed Forces and a former legal adviser to the National Security Council. Although he is not a technologist, Baker draws on his deep scholarly and practical understanding of national security law to produce a remarkable contribution to the emerging field of AI policy and regulation. Baker’s audience comprises the generalists—national security policymakers; government, military, and private-sector lawyers; leaders in industry and academia—who need to make “informed, purposeful, and accountable decisions about the security uses and governance of AI.” “The Centaur’s Dilemma” begins by providing an excellent layperson’s overview of the history, components, and potential uses of AI, with a focus on specific security uses and risks, including an explanation of controversial applications like LAWS (lethal autonomous weapons systems), swarms, facial recognition, and deepfakes. Baker then turns to the book’s central question—the centaur’s dilemma. A centaur is a creature from Greek mythology that has the upper body of a human and the lower body of a horse. It is an apt metaphor for the challenge of AI in the national security realm—how to harness the speed and power of AI, which may exceed human understanding, but still ensure that humans ultimately retain control of, and accountability for, critical decisions and judgments. 

As Baker explains, “artificial intelligence” is an umbrella term that encompasses a broad range of technologies that leverage advances in computational capacity to optimize tasks of increasing complexity and breadth. Futurists and philosophers have long imagined worlds in which super-powerful machines and computers dominate economics, politics, and society, and what that could mean for collective and individual humanity. That world does not presently exist, and there is a wide range of speculation about when, if ever, it will. But what is beyond debate is that so-called narrow AI is here today. This is AI that is focused on optimizing a particular task, generally based on the capacity of the AI to identify patterns in large amounts of data at scales and speeds that are increasingly beyond human capacity. There is also a stronger form of AI, often referred to as artificial general intelligence (AGI), in which an AI-enabled machine is able to shift from task to task, train itself using data from the internet and other sources, and ultimately rewrite and improve its own programming. The implications of narrow AI, especially as it evolves into stronger, more general forms, are anything but narrow.

Baker discusses the various legal frameworks that govern national security decisions, weaving in practical insights about how bureaucratic and political pressures can support or undermine sound national security decision-making. He starts at a conceptual level, laying out the three fundamental purposes of national security law: to establish the authority to act and the boundaries of that action, to provide a process for decision-making, and to express the nation’s essential values. He then examines the extent to which current law satisfies those purposes when applied to AI, considering constitutional provisions, statutes, executive orders, and other executive branch policies and guidance.

The legal discussion reflects Baker’s skills as a former judge and includes lucid summaries of the key constitutional principles and case law relevant to AI, followed by clear and accessible analyses of more technical and obscure areas of national security law, including the International Emergency Economic Powers Act (IEEPA), the Invention Secrecy Act, and the Defense Production Act. Baker identifies significant limitations in existing law when it comes to addressing many of the critical new issues raised by AI. Most of the relevant statutory and judicial authorities were developed long before today’s forms of AI were known, and even today the technology is rapidly developing in ways that make it difficult if not impossible for legislators and judges to keep pace.

After identifying these legal gaps, Baker explores several legal regimes that do not apply directly to AI but nonetheless offer potential guidance by analogy: nonproliferation and arms control regimes; the law of armed conflict; and various ethical and oversight mechanisms, including codes of professional conduct, internal review boards, and corporate social responsibility programs and practices. For example, Baker suggests AI policymakers consider the key concepts of the regime that has developed to regulate the control, proliferation, and potential use of nuclear weapons, including procedural elements such as doctrine, command and control, verification, and confidence-building measures. Of course, while the nuclear framework is analogous in some ways, there are important differences. In the context of nuclear weapons, decision-makers may have a few minutes to assess and respond to a potential attack. This is certainly a challenge. But with AI applications, the time constraints may be measured in nanoseconds, not minutes. In a battle between competing algorithms, any hesitation or delay can be fatal. As Baker warns, with AI even more than with nuclear weapons, “pre-delegation to humans and to machines in the form of code is essential, as is advance doctrinal agreement without which decision-makers will not know what to code or delegate.”

Another lesson Baker draws from the experience with nuclear weapons is that “the perfect should not be the enemy of the good enough.” International agreements and norm-setting processes involving the testing, proliferation, and use of nuclear weapons reduce risk in important ways even if they are not legally binding or universally accepted. The same can be said about international agreements governing other types of weapons systems, such as the Biological Weapons Convention and the Chemical Weapons Convention. This wisdom is particularly relevant to a technology like AI that is evolving far faster than international agreements and norms. The speed of technological change is a source of perpetual imperfection. It should be a reason to ensure flexibility in AI regulation, not a justification for inaction.

The United States’ national security apparatus is not known for nimbleness, nor is the law that governs it. When it comes to AI, the risk is not just that our generals will fight tomorrow’s war with yesterday’s strategy but also that the United States will lack the legal and policy guardrails that are essential to a lawful, accountable, and ethical protection of the nation’s security. There is also the further risk that policymakers and operational decision-makers will find themselves making recommendations and decisions involving technologies they barely understand. A basic level of tech literacy among policymakers and operational officials is a precondition for sensibly developing and implementing the new laws and new policies that AI requires. “The Centaur’s Dilemma” is not just an important contribution to the scholarly thinking around national security and AI. It is a practical reference book, intended, first and foremost, to empower those in the arena. National security officials and lawyers would be well advised to read it carefully and to keep a copy close at hand.

Steve Bunnell was General Counsel of the U.S. Department of Homeland Security from 2013 to 2017.
