
Regulating National Security AI Like Covert Action?

Ashley Deeks
Tuesday, July 25, 2023, 1:48 PM
Congress could use the covert action statute as a model to ensure the careful use of high-risk national security AI.
The U.S. Capitol Building in Washington, February 22, 2008. (Joe Zierer, https://www.flickr.com/photos/77155994@N00/2283945342/; CC BY-NC 2.0, https://creativecommons.org/licenses/by-nc/2.0/)


Congress is trying to roll up its sleeves and get to work on artificial intelligence (AI) regulation. In June, Sen. Chuck Schumer (D-N.Y.) launched a framework to regulate AI, a plan that offers high-level objectives and would convene nine panels to discuss hard questions, but contains no specific legislative language. Sen. Michael Bennet (D-Colo.) has advocated for a new federal agency to regulate AI. With others, Rep. Ted Lieu (D-Calif.) is proposing to create a National Commission on Artificial Intelligence. And at a more granular level, Sen. Gary Peters (D-Mich.) has proposed three AI bills focused on the government as a major purchaser and user of AI, which would require agencies to be transparent about their use of AI, to create an appeals process for citizens wronged by automated government decision-making, and to appoint chief AI officers. But only a few of these proposed provisions implicate national security-related AI, and none creates any kind of framework regulation for such tools.

Yet AI systems developed and used by U.S. intelligence and military agencies seem just as likely to create significant risks as publicly available AI does. These risks will likely fall on the U.S. government itself, not on consumers, who are the focus of most of the current legislative proposals. If a national security agency deploys an ill-conceived or unsafe AI system, it could derail U.S. military and foreign policy goals, destabilize interstate relations, and invite other states to retaliate in kind. Both the Defense Department and the intelligence community have issued policy documents reflecting their interest in ensuring that they deploy only reliable AI, but history suggests that it is still important to establish a basic statutory framework within which these agencies must work.

This challenge—trying to ensure that the risks that the U.S. national security bureaucracy takes are sensible, deliberate, and manageable—is not entirely novel. Congress has enacted a number of laws that create formalized processes by which the president must notify it of certain high-risk national security measures that he chooses to take. Some of these statutes create a baseline standard for presidential action, such as the covert action statute’s requirement that the president find that a particular action is “necessary to support identifiable foreign policy objectives of the United States and is important to the national security of the United States.” That statute also requires the president to share covert action findings with congressional leadership and intelligence committees, generally before the action takes place. The War Powers Resolution is another example: It requires the president to notify Congress within 48 hours when he introduces U.S. forces into hostilities without underlying congressional authorization to do so, and it requires him to remove those forces from hostilities within 60 days (or up to 90 days in certain circumstances) if Congress does not subsequently authorize their deployment.

Like the War Powers Resolution, the covert action statute helps ensure that the president’s use of a high-risk tool is legal and carefully evaluated, and it holds the president directly accountable for the decision to use it. The requirement that he report a finding to select members of Congress before the covert action occurs provides some transparency to a discrete set of non-executive actors who can provide a reality check about the need for a covert approach and the risks if the U.S. role is revealed. And the requirement that the president attest that the use of a particular covert action is necessary to support a specific U.S. foreign policy objective helps ensure that the executive has considered alternatives and found them inferior to the proposed covert action. It took years, and various iterations, before Congress landed on today’s version of the covert action statute, which came in response to scandals such as those uncovered by the Church Committee, as well as the later Iran-Contra affair.

Several elements in the covert action statute, including a baseline standard, presidential authorization, and congressional reporting rules, would translate well into a new law that addresses high-risk uses of national security AI not otherwise covered by existing reporting statutes. The purpose of an AI framework statute would be to ensure that the president himself approves the deployment of such high-risk uses, that senior policymakers and lawyers in the executive branch have the opportunity to debate those uses, and that Congress is aware that the United States is using such tools. 

The first thing a statute would need to do is to clearly define what types of AI tools would trigger its requirements. The highest-risk AI tools include those that could autonomously initiate an armed conflict or the use of nuclear weapons; those that autonomously initiate the use of kinetic force; those that lead directly to decisions to detain or prosecute people; and those that, if discovered by adversaries, might lead to dangerous geopolitical tensions. In short, the statute should focus on tools that pose a significant risk of loss of life or a reasonably foreseeable risk of serious damage to the diplomatic or military relations of the United States if the existence or use of the tool were disclosed without authorization. The statute could provide that the president may not authorize the use of a high-risk AI tool unless he first signs an “AI Determination” that briefly describes the tool, determines that the use of such a tool is necessary to promote U.S. national security, identifies which agency or agencies are authorized to use it, assesses that its use would not violate the Constitution or U.S. law, and concludes that the benefits of use outweigh the risks.

The statute also should require that the president notify certain congressional committees within a short period of time that he has signed an AI determination. As with the covert action statute, Congress could require that the president keep those committees “fully and currently informed” about the ongoing use of high-risk AI tools, including significant failures and significant changes in the tool’s use. Finally, Congress might consider a provision to the effect that “no use of high-risk AI may be conducted that is intended to, or has a high likelihood of, influencing United States political processes, public opinion, policies, or media,” along the lines of 50 U.S.C. § 3093(f). This would limit the government’s ability to use deepfakes, for example, where it was highly likely that the deepfakes would (even unintentionally) influence U.S. public opinion.

Congress could modify this proposal along various axes. For example, if Congress and the executive believed that requiring presidential signoff was too onerous or time-consuming, an alternative would be to require Cabinet-level officials to sign AI determinations for any high-risk AI that their agencies deploy and submit those determinations to the relevant congressional committees. Likewise, Congress could make the list of covered AI tools more or less capacious. The Defense Department and the intelligence community might require different framework statutes, given that they have different missions, underlying statutory authorities, and oversight committees.

This type of framework statute would likely prompt the executive to establish an interagency process to draft the AI determinations and review the legality of their contents. In the covert action setting, various administrations have established an interagency lawyers’ group to review draft findings, which helps ensure that the proposed actions do not violate U.S. law. A framework statute for national security AI would likewise ensure that both Congress and the president know when U.S. national security agencies are deploying the most sensitive, powerful, and risky AI tools to make battlefield and intelligence decisions.


Ashley Deeks is the Class of 1948 Professor of Scholarly Research in Law at the University of Virginia Law School and a Faculty Senior Fellow at the Miller Center. She serves on the State Department’s Advisory Committee on International Law. In 2021-22 she worked as the Deputy Legal Advisor at the National Security Council. She graduated from the University of Chicago Law School and clerked on the Third Circuit.
