
Should Democratic Governments Use Deepfakes?

Daniel Byman, Daniel Linna, V. S. Subrahmanian
Thursday, May 9, 2024, 11:10 AM

Governments should weigh the risks of diminishing their credibility when deciding when, if ever, to use deepfakes.

"Facial Recognition 1." (EFF Photos,; CC BY 2.0 DEED,

Published by The Lawfare Institute in Cooperation With Brookings

Deepfakes—fake images, audio, or videos created by deep learning AI techniques—are emerging as one of the most troubling sources of disinformation. Some uses are humorous, such as inserting comedian Jerry Seinfeld into the movie “Pulp Fiction.” Others do good, like the video of soccer superstar David Beckham spreading anti-malaria health awareness by “speaking” in 20 languages. Far more are socially troubling, such as teens creating nonconsensual pornography of classmates. 

Deepfakes, however, are also emerging as weapons of statecraft. Russia, for example, has used them in its war on Ukraine, creating fake videos of Ukrainian President Volodymyr Zelenskyy and senior Ukrainian defense officials telling soldiers to lay down their arms. Countries as diverse as Burkina Faso, India, Slovakia, Turkey, and Venezuela have all seen deepfakes used to sway voters and shape public opinion.

Such tools are nefarious, but they are also attractive, even for democratic governments. Deepfakes might be used to impart false orders so that soldiers of a pariah nation do not invade a defenseless country, to discredit a dictator who means to target his country’s minorities, or to sway a close election to put a more pro-Western government in power.

Perhaps not surprisingly, U.S. Special Operations Command was reportedly considering the use of deepfakes for its information operations. Democracies have long used information operations, including falsehoods, to undermine their adversaries, and altering photos or creating fake documents are time-honored tools. For example, intelligence officials reportedly planted fake notices in newspapers in Muslim countries, bearing the Soviet seal and announcing celebrations of the invasion of Afghanistan.

Despite their attractiveness, democratic governments should largely avoid using deepfakes because their use may diminish the credibility of true statements that governments make in the future. A lie told today might forever tarnish the integrity of the liar. But there may be exceptions to this rule. Indeed, in addition to establishing overall principles regarding deepfakes, the U.S. government and other democracies should develop processes to weigh when, if ever, to use them.

Deepfakes are risky for democratic governments to use. Democracy itself depends on information and public trust, and when the government puts out false information, it calls into question whether government information is reliable on topics ranging from the importance of vaccines to a potential threat from a foreign power. The use of deepfakes may also produce a “liar’s dividend,” an environment in which those confronted with evidence of corruption and abuses of power can sow uncertainty and avoid accountability by saying, “It’s fake.” Indeed, the loss of trust may give adversaries opportunities to influence elections, encourage internal conflict, and otherwise meddle more effectively. As grave as this risk is at home, it is even more so abroad, where U.S. credibility is often limited to begin with.

Yet, as with any foreign policy tool, the situation is rarely black and white. International law in this area is not well developed and, thus, is potentially permissive for forward-leaning, tech-savvy government officials. Democracies may respond in a tit-for-tat way to a dictatorship’s use of deepfakes by releasing clearly labeled deepfakes of their own to educate their own and foreign populaces about how easy it is to create these types of deceptive videos. They may also want to use appropriately labeled deepfakes to bring together disparate accounts of a reported genocide or other alleged atrocities in order to vividly illustrate the danger. What’s more, they may want to target a narrow audience, say, an individual leader, with a deepfake in a particularly grave and imminent scenario, such as planting a video of a Russian general voicing disloyalty to Russian President Vladimir Putin before an invasion.

When weighing whether to use deepfakes in these circumstances, democratic governments should first consider whether, given the many potential downsides, the deepfake will significantly reduce the risk of an invasion or otherwise change the course of events. Getting one general to issue fake orders through a deepfake might help in a narrow situation, but its strategic impact is likely to be limited. In addition, governments should consider how big the audience will be: Is the deepfake something that will affect only a few viewers, such as those within an adversary’s intelligence service, or will it enter the internet bloodstream and thus poison information at home as well as abroad? It is also vital to anticipate the likely responses to the deepfake: Some adversaries might react to the false information by speeding up an invasion, purging a suspect officer corps, or otherwise leaving the situation worse than it was before the deepfake.

Democracies should also consider whether the deepfake will eventually be revealed to the public—and the answer is that it usually will. News organizations such as AFP and USA Today have set up fact-checking teams for this purpose. For example, as early as February 2023, one of the authors identified an audio clip of Chicago mayoral candidate Paul Vallas as a likely deepfake for CBS Chicago. In addition to the possibility of outright detection, a proud government agency might later leak its clever plan to destabilize an adversary nation in order to show its population that it is zealously guarding national interests. Later generations of AI might also reveal the deepfake, using techniques that are not currently available: Although deepfake detection methods—including deep neural network-based machine-learning classifiers and algorithms that identify evidence of editing—are in their infancy today, they are improving by leaps and bounds. In the long term, such a revelation will weaken the democratic government’s credibility; in the short term, it may also strengthen the adversary, who can then claim—regardless of the truth—that any negative information is really a U.S.-planted fake.

Because the use of deepfakes implicates knowledge of an adversary as well as legal concerns, diplomatic risks, warfare, and other important issues, there should be a White House-run interagency working group with representatives from the departments of Defense, Justice, State, and Treasury and the Office of the Director of National Intelligence to ensure that different perspectives are brought together. If one government entity proposes using a deepfake, this interagency group would review the proposal, with each representative pointing out advantages and risks, and the group as a whole deciding whether to go ahead. Questions to consider before approving a deepfake campaign include whether the proposed goals are acceptable, whether the proposed use will achieve the desired effects, who the intended audience is and how large it is, what harms the deepfake may cause, whether the proposed use is consistent with domestic and international law, and how likely it is that the deepfake can be traced back to its source.

In addition to these national security and legal perspectives, it is necessary to bring in representatives who can weigh the domestic effects—something outside the traditional national security realm—which may require bringing in an appropriately cleared representative from civil society to represent the public interest. For instance, national security agencies might see benefits to a deepfake that creates a perception that a minority group is being repressed in order to discredit a dictator. A civil society organization in the United States that works on voting integrity, however, may point out that members of that supposedly persecuted group in the United States would fear for their kin, might lobby the U.S. government with misleading information, or might otherwise suffer and act on U.S. government disinformation intended for a foreign audience. What’s more, a health-oriented civil society representative may point out that if the deepfake leaks, the government would be seen as less trustworthy, hurting calls for, say, vaccinations—an issue that seems unrelated to repression abroad but is closely linked to the trustworthiness of the U.S. government. Such perspectives are important because representatives of traditionally national security-oriented agencies acting abroad may not be thinking of how their actions affect confidence in government at home or the sentiments of citizens.

Given their potential to cause harm, spread disinformation, and create other risks, the presumption of these representatives will likely be that democratic governments should not create and disseminate deepfakes. However, because of their potential power to influence foreign adversaries—among other benefits described above—democratic governments may at times choose to use them anyway. Democratic governments therefore can, and should, take steps to control their use of this dangerous tool, both to avoid undermining citizen trust in government and to prevent embarrassing and dangerous mistakes. With the rise of artificial intelligence and foreign adversaries’ deployment of other types of disinformation campaigns, citizens of democracies (including the United States) likely already have a difficult time discerning real information from fake in the vast online ecosystem—including information from their own government. Democratic governments have the opportunity to employ safeguards, in the form of a multiagency committee with members from civil society, to weigh the risks and benefits of using deepfakes and to consider how, if at all, they should be deployed. Taking these steps would help maintain public trust in government, a vital component of national security.

Daniel Byman is a professor at Georgetown University, Lawfare's Foreign Policy Essay editor, and a senior fellow at the Center for Strategic & International Studies.
Daniel W. Linna Jr. is a senior lecturer and Director of Law and Technology Initiatives at Northwestern Pritzker School of Law and McCormick School of Engineering.
V.S. Subrahmanian is the Walter P. Murphy Professor of Computer Science and a Buffett Faculty Fellow in the Buffett Institute of Global Affairs at Northwestern University. He has worked for over three decades on the development of AI techniques for national security purposes.
