The Case for a Deepfake Equities Process

Daniel Byman, Chongyang Gao, Chris Meserole, V. S. Subrahmanian
Wednesday, May 24, 2023, 9:15 AM
The United States needs to create a government-wide process to carefully weigh if and when it would ever use deepfakes.

Published by The Lawfare Institute in Cooperation With Brookings

Experts around the world are concerned about the explosive growth of deepfakes: digitally altered videos and audio that change a person’s appearance and words to spread false information, or that are otherwise created with malicious intent. Some deepfakes are harmless, such as AI-generated images of the Pope. Other uses, such as non-consensual deepfake pornography, are far more disturbing. A Congressional Research Service report predicts that hostile intelligence services might use deepfakes to embarrass adversary political leaders and create false “evidence” of war crimes, among other dangers. Academic experts Robert Chesney and Danielle Citron warn that the spread of deepfakes may allow dishonest leaders to dismiss genuine evidence of fraud or other wrongdoing as deepfakes. Rapid advances in artificial intelligence have made deepfakes far easier to create, and they are spreading rapidly across the internet.

As we have written, the national security realm offers many potentially dangerous uses of deepfakes, such as fake videos of a commander ordering soldiers to slaughter innocents or of political leaders making offensive comments that shatter morale. These are hypotheticals now, but we expect to see real examples soon. When Russia invaded Ukraine in 2022, Moscow released a deepfake of Ukrainian President Volodymyr Zelenskyy telling soldiers to lay down their arms and surrender: a taste of things to come.

Yet despite the risks they pose, in some narrow circumstances, democratic governments may consider producing and distributing deepfakes themselves. Indeed, the U.S. government has already considered using them for information operations. Imagine, for example, preempting a planned genocide by using a deepfake to (falsely) show chauvinistic leaders telling followers not to use violence or, alternatively, using patently absurd deepfakes to educate the public about how easy it is to manipulate images, perhaps in response to an adversary’s use of a deepfake that is gaining currency. Although the U.S. and other governments have long used an array of tools for information operations, the possibilities deepfakes offer for rapidly manipulating words and images are endless; deepfakes may allow government officials to respond to events in near-real time and create convincing video that advances their countries’ interests.

Though there may be constructive uses, these need to be balanced against the many potential harms that deepfakes might create. This is especially true for governments, whose use of a deepfake—however justified in one narrow or even major area—might discredit their broader efforts to inform their publics and the world, leading citizens to doubt government warnings about the spread of disease, terrorism threats, and other grave dangers; that loss of credibility might last for many years. The use of deepfakes may also run counter to U.S. Department of Defense guidance on artificial intelligence, which calls for a principled approach to its weaponization.

What is necessary is an interagency process that brings together a broad range of stakeholders—one that examines not only the immediate military and security benefits but also the consequences for the long-term information environment, including for ordinary U.S. citizens. There are existing models for such a process, such as the Vulnerabilities Equities Process (VEP) in the cyber realm. When a U.S. government agency finds a qualifying zero-day cyber vulnerability, it might exploit that vulnerability in an attack on an adversary nation, it might disclose the vulnerability to the relevant companies so that users and organizations worldwide are better protected, or it might take some other action between these two extremes. The VEP helps government agencies balance these concerns, weighing the threat an adversary poses, the danger to U.S. systems, the political embarrassment of disclosure, and other factors.

But exactly how the VEP works remains murky. Much of what we know about it comes from a document publicly released in 2017. According to this document, when a government agency discovers a potential vulnerability, it must be reported to an Equities Review Board consisting of representatives from several U.S. government organizations, including the CIA, the Office of the Director of National Intelligence, the Department of Defense (including the National Security Agency, or NSA), the Department of Justice, and others. These organizations represent different interests, which are weighed as a decision is made.

Let’s take a hypothetical example. Suppose someone at the FBI were to discover a vulnerability in Microsoft Word. As long as this vulnerability is both new and not publicly known, it would qualify to “enter” the VEP process. The discoverer (the FBI in this hypothetical example) would notify the VEP executive secretariat (typically managed by the NSA), which would then, within one business day, notify the VEP points of contact at each of the 10 participating U.S. government agencies. Any of these agencies can “claim” the equity if it has an interest in using the vulnerability in an exploit. If there is disagreement among the 10 agencies on the Equities Review Board, there is a discussion period aimed at generating consensus. If consensus is not reached, a tentative outcome is decided by majority vote; however, any agency in the minority may appeal this outcome to the National Security Council.

In this hypothetical example, offensive cybersecurity experts at the NSA might want to use the discovered vulnerability to target an Iranian drone development facility where Microsoft Office is in use. However, Microsoft Office may also be in wide use at the Pentagon, and some parts of the Defense Department may worry that if an exploit against the Iranian drone development facility were discovered, the vulnerability would become known to the Iranians (and perhaps, through them, to other U.S. adversaries allied with Iran), who might then use the same vulnerability to target the Pentagon. In such a situation, the Equities Review Board may decide not to proceed with the exploit and instead disclose the vulnerability to Microsoft. However, had the vulnerability been used to target the internal networks of the Islamic Revolutionary Guard Corps, the decision may well have been different (exploit, don’t disclose to Microsoft), especially if there was intelligence about imminent attacks.

An equivalent process for deepfakes would differ primarily because, rather than “discovering” a vulnerability, a government agent would instead be proactively creating deepfakes to exploit perceived vulnerabilities and opportunities within the information environment. The role of private companies would also differ. A social media company might inadvertently host the deepfake (and take it down if detected), making its interests quite different from a case where there is a bug in Microsoft’s software and the company would welcome the responsible disclosure of a vulnerability by a government agent. Further, the potential for retaliation is far greater than with cyberattacks: not only adversary nation-states but also non-state actors and even individuals can retaliate. As the recent viral AI-generated image of the Pope illustrates, deepfake generation is now a commodity that anyone can use, even those without any training in programming. The ease with which U.S. adversaries can create credible deepfakes—whether via open-source libraries or consumer apps like ChatGPT (for deepfake text), MidJourney (for deepfake imagery), and Speechify (for deepfake audio)—is increasing rapidly and does not require sophisticated state resources to scale. Last but not least, deepfakes can corrupt the information environment at home, leading U.S. citizens to believe false information and fostering distrust of the government should a deepfake be discovered or revealed. A variant of the Vulnerabilities Equities Process for deepfakes would need to consider the potential for blowback from a wide range of actors—not just foreign states but also non-state actors, nongovernmental organizations, and individuals.

But the basic principle of bringing together multiple stakeholders in a timely way to weigh the pros and cons of government use of deepfakes should carry over to a deepfake review process. Any U.S. government entity that wanted to use a deepfake would have to share it as part of a formal equities process. National security-focused agencies like the Defense Department and the NSA would be involved, but so too would the Department of Justice, to ensure legal standards are met. The Treasury Department might be involved to consider implications for the global financial system. Given that deepfakes, unlike cyber vulnerabilities, affect the quality of the overall information environment, there might be representatives tasked with considering the domestic ramifications of any deepfake used overseas. Officials from the Commerce Department would want to consider the reactions of social media companies, which might be less inclined to work with the U.S. government in the future if they felt their platforms had been used to manipulate their users. Pollsters with appropriate clearances might be needed to weigh in on how U.S. and/or foreign populations would likely react to a specific use of deepfakes.

Let’s consider a case akin to the 2022 Russian invasion of Ukraine. Say the U.S. military proposes to release a deepfake before the invasion showing Putin bragging about how he can fool his people into believing that Ukraine is run by Nazis and how little he cares if Russia takes massive casualties, arguing that such a release would increase Russian domestic dissatisfaction with the war. Other agencies might point out that a deepfake of Putin would have more impact after the war begins. Still others might oppose the use of deepfakes altogether, arguing that it would discredit future U.S. releases of genuine video showing Russian atrocities or mislead audiences in the United States. Others might wonder how effective the deepfake would be in turning public opinion against Putin in the first place, and whether there are better ways to achieve the same objective.

Democratic governments should rarely use deepfakes, but there may be occasions to consider them. Such decisions should not be made in a bureaucratic Wild West. A process that balances different equities and ensures either consensus or high-level approval is vital if deepfakes are to be used both effectively and sparingly.


Daniel Byman is a professor at Georgetown University, Lawfare's Foreign Policy Essay editor, and a senior fellow at the Center for Strategic & International Studies.
Chongyang Gao is a Ph.D. student in Computer Science and the Buffett Institute for Global Affairs at Northwestern University. His research interests include security, multimodal generation, and vision-language tasks.
Chris Meserole researches emerging technology, international security, and violent extremism. He is a fellow in the Center for Middle East Policy at the Brookings Institution.
V.S. Subrahmanian is the Walter P. Murphy Professor of Computer Science and a Buffett Faculty Fellow in the Buffett Institute for Global Affairs at Northwestern University. He has worked for over three decades on the development of AI techniques for national security purposes.
