
The Lawfare Podcast: How Should Governments Use Deepfakes?

Eugenia Lostri, Daniel Byman, Daniel Linna, V. S. Subrahmanian, Jen Patja
Tuesday, March 12, 2024, 8:01 AM
Discussing the potential benefits and risks of deepfakes.

Published by The Lawfare Institute in Cooperation With Brookings

Progress in deepfake technology and artificial intelligence can make manipulated media hard to identify, making deepfakes an appealing tool for governments seeking to advance their national security objectives. But in a low-trust information environment, balancing the risks and rewards of a government-run deepfake campaign is trickier than it may seem.

To talk through how democracies should think about using deepfakes, Lawfare's Fellow in Technology Policy and Law, Eugenia Lostri, was joined by Daniel Byman, Senior Fellow at the Center for Strategic & International Studies and professor at Georgetown University; Daniel Linna, Director of Law and Technology Initiatives at Northwestern University; and V.S. Subrahmanian, the Walter P. Murphy Professor of Computer Science and Buffett Faculty Fellow at Northwestern University. They recently published a report examining two critical points: the questions that a government agency should address before deploying a deepfake, and the governance mechanisms that should be in place to assess its risks and benefits.

Please note that the transcript was auto-generated and may contain errors.

Transcript

[Audio Excerpt]

V.S. Subrahmanian

There were some differences in terms of the perspectives on when to use deepfakes. I'd say that overall, there was one person who we interviewed who was more or less opposed to the use of deepfakes under any circumstances, but everybody else provided a much more nuanced picture. They were unwilling to forgo the use of deepfakes a priori, preferring instead to articulate a clear process and a set of guidelines and guardrails that would decide when a deepfake is used and when it is not.

[Main Podcast]

Eugenia Lostri

I am Eugenia Lostri, Lawfare’s Fellow in Technology Policy and Law, and this is the Lawfare Podcast, March 12, 2024.

Progress in deepfake technology and artificial intelligence can make manipulated media hard to identify, making deepfakes an appealing tool for governments seeking to advance their national security objectives. But in a low-trust information environment, balancing the risks and rewards of a government-run deepfake campaign is trickier than it may seem. To talk through how democracies should think about using deepfakes, I was joined by Daniel Byman, Senior Fellow at the Center for Strategic and International Studies and professor at Georgetown University; Daniel Linna, Director of Law and Technology Initiatives at Northwestern University; and V.S. Subrahmanian, the Walter P. Murphy Professor of Computer Science and Buffett Faculty Fellow at Northwestern University. They recently published a report examining two critical points: the questions that a government agency should address before deploying a deepfake, and the governance mechanisms that should be in place to assess its risks and rewards.

It's the Lawfare Podcast for March 12th: How Should Governments Use Deepfakes?

Some weeks ago, at the Munich Security Conference, many of the major tech companies signed a commitment to adopt reasonable precautions to prevent AI tools from being used to disrupt elections. Now, this is one example, but I think it represents a general concern we have this year about the way that AI can help erode democratic institutions. So when you look at the current environment, what are some of the real or potential uses of deepfakes that you are concerned about?

V.S. Subrahmanian

So, I can think of many ways in which deepfakes can affect the outcome of an election. In our report, we talk about how a democratic government might use deepfakes to influence the outcome of an election that is likely to be stolen by the leader of a not-so-democratic country. But in the same way, an adversary state can target an election in a truly democratic country by sending out deepfakes that depict the leading contender in that election -- perhaps somebody that adversary state doesn't like -- painting him or her in an unflattering and dishonest light. It may likewise boost the profile of somebody who is not a leading contender but is the adversary state's preferred candidate.

Eugenia Lostri

So, when it comes to mis- or dis- or mal-information, whichever term you want to use, you don't necessarily need high degrees of sophistication to achieve the results that you want. And in some cases, the malicious intent you may have can be pursued by flooding the space with what are called “cheap fakes” or just photoshopped images because the point is not necessarily to convince people that what you're saying is true, but rather to lower the trust environment in general. So do you think deepfakes pose a new type of threat, or can we look at our experiences with other misleading media to gauge what are the risks and benefits that the technology poses?

Daniel Byman

I would say that deepfakes, while certainly not new in the type of danger they pose, do have their own, I would say, unique or at least unusual characteristics. So, as you said, we've had Photoshop, so you can change photos. If you want to go back in history, you have, whether it's political cartoons or even coinage, attacks on rival leaders. Numerous states have put out a torrent of propaganda and falsehood over the centuries. So certainly disinformation and misinformation are time-honored tools, but deepfakes have several advantages over their predecessors. One of which is that they're increasingly realistic: if you go back five years and look at some of the deepfakes, you could see, even with the naked eye, indicators that this was simply generated content and not real. Every week, it seems, they're getting better and better. In addition, you can churn them out at scale. This isn't something that needs to be done painstakingly by a few experts in a dark room. You can do massive numbers of these things, and really a wide array of actors can do massive numbers of these things.

I would stress your point that one of the biggest impacts is going to be the worsening of the overall information environment. But there may be certain circumstances where it's not just that people disbelieve, but that they actually believe. They see something that is a deepfake, and they're convinced that it's true, or at least they may be for 24 hours, 48 hours, 72 hours, and that may be a critical juncture.

Daniel Linna

Yeah, I think this is a great aspect of this to think about -- the way that it affects the whole ecosystem. A lot of the discussion in the past, it seems, was about how people need to be educated, need to be aware. And that still remains true, of course, but now the misinformation is so credible, the deepfakes so convincing. And one of the things I hear people offering as a solution is, "Well, I tell everyone you can't believe anything that you see." And it's like, wait a second: to what extent can that be a solution, if we live in a world where we trust nothing? This is the so-called liar's dividend. So the way the whole ecosystem is changing in connection with these threats is really interesting to consider.

Eugenia Lostri

Yeah. So for your report, which looks at when and how a government might want to use a deepfake, you developed five hypothetical scenarios, and you then conducted some interviews with leaders in the field to assess those scenarios. And while we don't necessarily need to do a one-by-one description of each one of these scenarios, I do recommend that anyone listening who's interested should go read the report, which is really interesting. Could you maybe give us a sense of what are some of the specific challenges that a government official as a practitioner may be facing when they find themselves tempted to use deepfakes as a solution?

V.S. Subrahmanian

So a government official usually is concerned about his or her mission. So regardless of whether you're very high up in the chain of command, or at some moderately high rank in the government, you have a job to do, a mission to accomplish. And if you feel that deepfakes can help you accomplish that mission, then you are tempted to use them. This is particularly the case in scenarios where there is urgency to the issue at hand. So somebody who's tasked with achieving a mission in, let's say, 24 or 48 hours is at a crossroads. They've got to act quickly and decisively in order to achieve that mission, and they've got to use whatever tools they have. Those are the kinds of people who I worry will use deepfakes without thinking through the broader ramifications of that use.

Eugenia Lostri

Could you maybe expand a little bit or give us a sense of what do those scenarios look like? When you're talking about the urgency, give us an example, maybe one of the more striking hypotheses that you put forth in the report.

Daniel Byman

One of the more difficult scenarios we put forward to the experts we talked to was a situation where there seems to be an imminent genocide. So you have really the worst possible political event -- the possibility of huge numbers of innocent people being killed. And in that circumstance, would you consider putting out false information to reduce the risk? That one's extreme. And then we tried to look at an array of more traditional scenarios, ranging from things like an invasion or discrediting a leader to a business case, where there might be a business that needs it. But in all of them, as V.S. said, there was a sense of urgency: something's going on, it's really bad, and we have this tool -- should we use it?

Eugenia Lostri

I was interested, as I was reading your report, in whether, in broad strokes, you found particular differences in the ways the leaders you interviewed -- people from different regions and different industries -- brought different perspectives to the question. Did they have very different solutions or very different considerations when looking at these scenarios, or was there overall an alignment between them?

V.S. Subrahmanian

I think there was a substantial amount of alignment between the people we interviewed. All of them agreed that deepfakes should be used with extreme caution, and in particular, that deepfakes that target leaders -- especially leaders of foreign governments and major social or religious leaders -- should be used with great caution, because a deepfake compromises the integrity of the government that uses it, or at least has the potential to compromise that integrity. So I think everybody agreed with that.

There were some differences in terms of the perspectives on when to use deepfakes. I'd say that overall, there was one person we interviewed who was more or less opposed to the use of deepfakes under any circumstances, but everybody else provided a much more nuanced picture. They were unwilling to forgo the use of deepfakes a priori, preferring instead to articulate a clear process and a set of guidelines and guardrails that would decide when a deepfake is used and when it is not. And those are the questions that we tried to study in this report.

Eugenia Lostri

And that's exactly, I think, what your proposal does--here's a guiding framework with several questions that you should be asking yourself, and a process that should exist when you're considering using a deepfake--which is particularly interesting. So maybe just tell us what those questions are, if you can go through them, so that our listeners have context for the conversation to come.

V.S. Subrahmanian

So we came up with a total of seven questions. The first question really was, is the attempted deepfake going to achieve the goal that we seek to achieve with it? Or will it be uncovered as a deepfake and fail? Second question, who is being targeted by the deepfake? Is it a huge audience which is going to see this deepfake? Or is it a very small set of people who are going to see the deepfake? Third, are any civilians going to be harmed because of the use of the deepfake? Are we going to put people in physical danger because of the use of the deepfake? The fourth question was about international law. Would the use of the deepfake be compatible with international law? The fifth was, what is the nature of the specific person or persons depicted in the deepfake? Are they world leaders, social leaders, leaders of major religious movements? If so, we should treat those kinds of deepfakes with greater caution than those of a lower-rung person who's likely to attract less attention. The sixth question was, what is the goal of the deepfake? What is the purpose that it is intended to achieve? Is it to protect, for example, U.S. persons from immediate harm? Is it a tit-for-tat use? And last but not least is traceability and blowback. How likely is it that the deepfake will be traced back to the originating country, the country that created the deepfake? And what is likely to be the blowback in the event that it is attributed correctly?
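[Editor's note: the following is a minimal, purely illustrative sketch of how the seven questions might be encoded as a structured pre-deployment checklist for a proposing agency. The field names, structure, and flags are hypothetical and are not drawn from the report.]

from dataclasses import dataclass

@dataclass
class DeepfakeProposal:
    likely_to_achieve_goal: bool   # 1. Efficacy: will it work, or be exposed as fake?
    audience: str                  # 2. "broad" public vs. "narrow" target set
    civilians_at_risk: bool        # 3. Physical danger to civilians?
    complies_with_intl_law: bool   # 4. Compatible with international law?
    depicts_major_leader: bool     # 5. World, social, or religious leader depicted?
    stated_goal: str               # 6. Purpose (e.g., protect U.S. persons, tit-for-tat)
    likely_traced_with_blowback: bool  # 7. Attribution risk and likely blowback

def flags_for_review(p: DeepfakeProposal) -> list[str]:
    """Return the concerns that demand escalated scrutiny before approval.

    Question 6 (purpose) is recorded for the reviewers' judgment
    rather than auto-flagged.
    """
    flags = []
    if not p.likely_to_achieve_goal:
        flags.append("efficacy doubtful: may be uncovered and fail")
    if p.audience == "broad":
        flags.append("broad audience: greater harm to the information environment")
    if p.civilians_at_risk:
        flags.append("civilians may be physically harmed")
    if not p.complies_with_intl_law:
        flags.append("possible violation of international law")
    if p.depicts_major_leader:
        flags.append("major leader depicted: treat with greater caution")
    if p.likely_traced_with_blowback:
        flags.append("attribution likely: weigh blowback")
    return flags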

Eugenia Lostri

Great. Thank you for that overview. The scenarios that you present in your report mostly focus on ways in which the deepfake could be used to change people's minds, to course correct ahead of this very grave harm happening. And when we look at that first question of efficacy, it seemed to me like maybe honesty remains the best policy. Is that correct?

Daniel Byman

I think it's fair to say that, in general, honesty remains the best policy. I don't want to speak for my co-authors, but I feel that many of us came in being a little leery of absolutes -- that you would never do this or, for that matter, always do it -- but at the same time also being skeptical that this should be a regular tool of statecraft. And on that initial question of efficacy, there are often lots of other things that might be tried first and might have a better chance of working. And even if you are considering a deepfake, there may be a lot of reasons to think it's not going to succeed: it could be easily denied, or it might at times even make things worse. And then, going directly to your question, at times the answer might be -- take the genocide scenario -- publicizing the truth. That actually may be more powerful, at least to certain audiences, than spreading lies about a particular dictator's actions.

Daniel Linna

I think that also highlights the need to think about these different categories together, of course, too. And it's really difficult to generalize. That's the point of doing this work -- to start getting closer to specifics and thinking about the details around this. Take, for example, the idea of a leader of a country. If the goal is to get the leader of the country to think that one of the leader's generals is no longer faithful, well, then you could think about a very narrow audience. Efficacy changes, perhaps, in that sense, and in particular with the timing and the context. So I think it's important to ask these high-level questions. And generally, I think the conclusion here is that staying faithful to the truth on the ground is probably going to be the best policy. But there are going to be circumstances, particularly considering these other criteria, where you can see that a deepfake might be effective in that situation.

V.S. Subrahmanian

I'd like to add one thing to that, which is that unless we know that all our adversaries are going to agree not to use deepfakes, it may not be a bad idea to keep them as a strategic instrument that can be used by our government at a time and in a manner of its choosing, and with great caution.

Eugenia Lostri

So that's an interesting point because there seemed to be some positive feeling around the idea of maybe using a deepfake or creating false media around true events. So when there's truth behind the allegation, maybe some of the people that you interviewed felt a little bit more comfortable with a government creating false media that would depict that. But wouldn't you fall into the same trap here? Even if there's truth to the allegations, false evidence could end up undermining the claim.

Daniel Byman

I think that's correct. Let me say the positive case first, but then go to the negative case. So, on the positive side, although the particular image may not be true, the broader concern that all the people we interviewed had was that you're really polluting the information environment in a way that is making people effectively less informed. But if you're able to say, "Look, we have a lot of evidence that indicates X, Y, and Z, so we've tried to create a composite that we feel represents that, even though that composite itself is not drawn from real images," you may actually be making people smarter. The particular example may not have happened in reality, but you're advancing people's understanding in the way, in my mind, that good fiction does -- where you're reading a novel about a war or something horrible and, of course, it's a novel, but at the same time, it makes it more vivid, it makes it more real. And you could simply label it, say that this is an artificial representation of what we think is going on: this video is not true, but we think, based on actual events, that this is happening.

But, as you say, even a small amount of falsehood could allow actual images to be discredited -- when very brave reporters or local citizens who, in many cases, risk their lives to bring things to the international eye could be discredited. That's a huge danger. So I think the idea of at least labeling artificial images as artificial has real advantages.

Daniel Linna

Yeah, this idea of beneficial deepfakes -- we didn't explore it a lot in this piece, but we talked about it a little bit, and you could think of ways in which that could be beneficial. Transparency seems to be key there: everyone would know it's a recreation, or something like that, based on what actually happened. But of course that does open up a lot of questions. Even where you have accounts of events, there can sometimes be very different ideas of what actually took place, or what people's intentions actually were, and things like that. So it could definitely be messier than we might acknowledge here, but you can see the potential there for some possibilities that could be beneficial.

Eugenia Lostri

Connected to this point is the question of who the audience is for the deepfake. And I think we're very used to talking about the harms that deepfakes can cause when they spread online. We already mentioned the harm to the overall information environment. But as you mention in your report, if a government is hoping to use one, it could, for example, plant it as false intelligence, hoping that only a few targets would actually have eyes on it. It's not really about changing the hearts and minds of a broad population, but affecting the decision making of very specific targets. So what are some of the challenges that that type of operation entails that are different from the spread of false information on social media?

Daniel Byman

There are a few challenges I would highlight. One is, of course, you need very precise intelligence and capabilities to make sure that it is planted in the right place. So notionally, let's say that we're trying to convince an adversary leader that one of their generals is disloyal. That's probably something that's not of major concern to vast audiences, but it is of tremendous concern, of course, to a particular leader. But it would require being planted in just the right place, in some way that is believable, so that the evidence is credible. And this is something that is always a challenge with planted intelligence. If you have fake enemy orders that you're trying to use for a deception operation in war, how do you get those to the enemy in a way that is credible? And there are these legendary stories in the history of intelligence where successful services have gone to great lengths to do so. So that's one challenge.

Another challenge, of course, is making sure that the image stays within narrow confines. Sometimes, of course, the other side may help with that. So if we take the disloyal-general example, a leader like Putin wouldn't want it known that his generals are disloyal. So he has an interest in keeping it limited, just as the U.S. or another intelligence service would if it planted it. So you can imagine scenarios where everyone wants it quiet, and that reduces the risk of poisoning the information environment. Having said that, there still are some dangers. One is that you're simply wrong: it does get out, and you misjudged the circumstances. All of us are fallible. But another is that frequently operations, including successful operations, later leak. And one can imagine someone who wants to brag a little bit about their accomplishments or those of their agency, or increase their budget, saying, "Here's how we fooled Putin," and getting credit for it. And as a result, six months later, this is out there in the ecosystem and does have that poisoning effect, even though the original operation, narrowly defined, could be judged as a success.

Eugenia Lostri

So the third question in the framework that you propose is who might be harmed by the deepfake, and whether innocent civilians are going to be harmed. And I could imagine that ascribing causation to one false video could be challenging, right? Because we say, well, you put this out there, then there's civil unrest, and then people died, and it's all because of the deepfake. And color me a little bit skeptical about the influence that one particular video may have. So how do you, especially in the complex environments that you describe in your hypotheticals, navigate that complexity to make sure that this question in particular remains useful as a guiding principle?

V.S. Subrahmanian

I'd like to talk about a different example we proposed in our article, which deals with intellectual property theft. And I think the example we gave was one where a foreign government steals intellectual property on, say, the design of some sophisticated engine or missile. However, the company that created that design in the first place, the victim of the intellectual property theft, had generated, let's say, 99 fake versions of that original design. So there were a hundred versions of the design in all on their systems. When they were hacked, all hundred were stolen by the thief, and the thief didn't identify the correct design. They got fooled and chose one of the wrong designs. They executed on the design, and it literally blew up in their face, killing a certain number of innocent civilians who were working on the project at that time. That's an example of a tangible harm that might occur. I think most of the people we interviewed in that case felt that this harm was not caused so much by the fakes that were planted by the hacked enterprise within its own network. They felt that the blame for this was clearly due to a criminal act perpetrated by the entity that stole the intellectual property, and that therefore, despite regret at the loss of life, which all of us would certainly feel, the apportionment of the blame was clearly on the shoulders of the party that stole the data.
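[Editor's note: the following is a toy sketch of the decoy technique described above -- surrounding one real design file with many plausible fakes so that an intruder who exfiltrates the directory cannot tell which is genuine. All file names and parameters are hypothetical.]

import json
import random
from pathlib import Path

def plant_decoys(real_design: dict, n_fakes: int, out_dir: str) -> None:
    """Write the real design plus n_fakes perturbed copies, indistinguishably named."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    designs = [dict(real_design)]
    for _ in range(n_fakes):
        fake = dict(real_design)
        for key, value in fake.items():
            # Perturb numeric parameters slightly so fakes look plausible
            # on inspection but would fail if actually built.
            if isinstance(value, (int, float)):
                fake[key] = round(value * random.uniform(0.9, 1.1), 4)
        designs.append(fake)
    random.shuffle(designs)  # no telltale position for the real design
    for i, design in enumerate(designs):
        (out / f"design_rev_{i:03d}.json").write_text(json.dumps(design, indent=2))

# One real design hidden among 99 fakes, as in the scenario above.
plant_decoys({"nozzle_diameter_mm": 42.5, "chamber_pressure_kpa": 6894.0},
             n_fakes=99, out_dir="designs")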

Daniel Linna

I think the question about causation is a really interesting question as well, because it also highlights the fact that it might not be until after something happens that we are able to piece together what caused events to unfurl. But if we take a step back, the purpose of this is to have a process. And maybe we ought to talk a little bit about that process and who's involved in it, because one of the concerns here, of course, is that no time will be taken just to stop and ask these questions -- simply asking, who could potentially be harmed here, and how might that come about? -- and having the discussion, going in with eyes open. And of course, sometimes someone could end up getting harmed who hadn't been considered the first time around. But if you have a process, then ideally that process is one that allows you to learn as you make decisions and further consider these things as you go. So I don't know, Dan and V.S., if you want to talk a little bit too about the parties involved in that process, because that seems pretty important too.

Eugenia Lostri

Let's talk about the process. I do have more questions about the specific questions that the process is supposed to bring up. But you do recommend creating a deepfake equities commission. So tell me a little bit more about that. What does it entail and who would be a part of it? How do you envision this working?

V.S. Subrahmanian

The way we envisage this is that when an entity in the government is contemplating the use of deepfakes for some operational purpose, it will propose to a Deepfake Equities Commission the rationale for using the deepfake. It will give some details about what deepfakes it's trying to use, and it ideally should answer the seven questions we've articulated in our report. The Deepfake Equities Commission, which we envision as an entity operating out of the National Security Council and including representatives from multiple agencies such as the DOJ, the DOD, ODNI, and others, will look at this. Over time, the Deepfake Equities Commission would learn a number of lessons. It would look at the questions that go beyond the specific use case and consider the broader implications. Such a commission would also have representatives from civil society -- appropriately cleared, of course -- who would look at the impact of the deepfake on the national discourse, trust in the media, trust in the government, and more. So, I think, over time, the hope is that the Deepfake Equities Commission also learns and refines this process that we've articulated.

Eugenia Lostri

You sound fairly optimistic that this commission, which would involve all these different stakeholders, would be able to reach a decision -- present the information, discuss, reach a decision -- all in a way that would still allow you to have a rapid response to a crisis that, as we mentioned earlier, requires one. And maybe that's why we're talking about deepfakes -- the potential for a rapid response is the benefit of the technology, I would imagine. Do you think I'm being a little bit too pessimistic here? I wonder whether a team that needs to come together and have these conversations would undermine that rapid-response element.

Daniel Byman

I would say, of course, anytime you use the word bureaucracy, the words that come to mind are not rapid or efficient or responsive -- we could go on. However, one thing that bureaucracies tend to be very good at is systematizing procedures. The first time people meet, that's a multi-day, probably multi-week, multi-month effort, where people figure out their positions, try to understand those of others, on and on. So I can imagine, in those first situations, things moving very slowly. But over time, a lot of the arguments actually get resolved, and not always resolved for the best. But people figure out areas of agreement and disagreement. And they're able to rapidly use analogies: "We approved it in this case two weeks ago, and this is similar to that case except it has this one different characteristic. So let's discuss that one different characteristic and see if that's disqualifying or not." You can imagine a host of circumstances over time where this goes quickly.

One thing I'm more familiar with is the debate about the targeted killing of terrorist leaders. And that was something that, for obvious reasons, was a huge deal and got the most high-level attention initially, but over time became, for better or worse, a more routinized bureaucratic procedure. It still would be elevated if you were talking about strikes that were particularly controversial. But at the same time, you had a number of actors who knew the rules, for better or worse, and they were adjusted as new administrations came in and new concerns emerged. But you went from slow and cumbersome to, bureaucratically at least, relatively efficient. V.S., I want to actually have you talk a little bit on the cyber side, because I think that's a really useful way of thinking about this.

V.S. Subrahmanian

Yeah, thanks, Dan. So in cybersecurity, there is something called the vulnerability equities process, and our deepfake equities process is to some extent inspired by that. In the vulnerability equities process in the U.S. -- and there are similar processes in many major democracies around the world -- when a cyber vulnerability is discovered by a U.S. government entity, it is reported to the vulnerability equities process, which then considers one of two actions. The first is to disclose the vulnerability to the vendor whose product has it, so that it can be fixed. That would be the case, for example, if a vulnerability is found in an extremely popular product, such as a specific web browser or mail client that millions of people might use, or a router that is used all over U.S. networks. The second is to retain it, because such a vulnerability also provides an opportunity for U.S. intelligence or defense agencies to carry out an offensive cyber operation against a foreign state adversary. But there is also a risk that adversaries who know about that vulnerability can use it against us to great effect. So the vulnerability equities process in the U.S. government tries to balance offensive use, perhaps for some limited period of time, against disclosure. If you use it, for how long are you going to use it before disclosing it to the vendor to fix? If you're not going to use it, you should disclose it immediately.

So there's a process, which also involves a multi-agency committee that looks at this. They actually have deadlines on how long each step of the process should take. And there, the decision about the use of the cyber vulnerability may take a little longer -- there may be more time available to make that decision than perhaps in the case of a genocide, which we're contemplating here. So there are a lot of similarities, and some differences.

Eugenia Lostri

So let me dig into that a little bit more, because I do think it's interesting to draw these parallels. Would you envision the Deepfake Equities Commission also flagging when they find a deepfake in the wild? So there would be a discussion that is maybe more akin to, well, we have this vulnerability -- is it in our interest to disclose it, or can we exploit it? Or is the conversation in the deepfake case just limited to future operations by the government?

Daniel Byman

Our thinking was future operations by the government, but in practice, they would be discussing deepfakes writ large. So as these things emerge, that would be part of the discussion. So if there were a deepfake -- I'm making this up, but let's say one that the government of Brazil used against a neighbor -- people would look at the consequences, look at the questions we've examined, and say, how does this work in practice? And let's be honest here: we tried to say very clearly in the paper that we think we're beginning the process of thinking about this. So there may be new criteria. People may say, hey, something really bad happened and the questions we're asking don't answer that; let's add a new question or new factors to consider. And those sorts of experiences are going to shape the probabilities, if you will, of different outcomes and how people think about this. So I think there would be, at least I would hope, a fairly rich discussion of broader deepfakes going on in the world, but the specific focus of what we imagine would be when someone in the U.S. government is proposing to use one of these for the interests of U.S. foreign policy.

V.S. Subrahmanian

I want to add to that: an essential part of any form of warfare is to understand your environment, understand your adversary, understand history. The Deepfake Equities Commission is going to have to understand what other actors did with deepfakes over the years, even if those were not targeted at us. It would need to understand how successful those deepfake campaigns by other actors were. It's very similar to what we're seeing in terms of the use of underwater drones by Ukraine in the war that Russia unleashed a couple of years back. Ukraine is successfully using underwater drones to target Russian ships, and everybody's looking at that to see what we should learn from it. And in the same way with deepfakes, as they are used, regardless of whether the United States is a participant in a conflict or not, we should keep track of what's going on, understand how deepfakes were used, understand how that impacted the targeted population, understand the decision making of the targeted leadership, and figure out what the takeaway messages are that we need to learn.

Eugenia Lostri

Now, one more question about the commission. When deciding how to design the solution, is there a reason why you chose to focus specifically on deepfakes and not make this commission more technology-neutral? Because, as we've established, there are other ways in which you can flood the information environment, in which you can try to convince your adversaries to behave differently. So why deepfakes? We've touched on this before, but it's hard not to come back to it. Many of the concerns that we've raised hold true for many other types of information operations. So why deepfakes? Why do they require this specialized process?

Daniel Byman

I think you raise an excellent point that this could be a process simply for the deliberate use of mis- or disinformation, and that could take many forms. We focused on deepfakes because they are relatively new and because of their potential power. We also believe that, in contrast to some of the earlier forms of mis- and disinformation, they could have profound consequences for U.S. audiences and their faith in their own government. And one of the hardest things that at least I tried to wrestle with when we thought about the process was really who should represent the American people. And that sounds almost like a silly philosophical question, but when you're thinking bureaucratically, who has bureaucratic responsibility for the integrity of the U.S. information space? It's different from something like who's responsible for knowing the adversary -- there you'd say the intelligence community -- or who's responsible for determining the legality, where you could turn to the Department of Justice; for many of the questions, you can go fairly obviously to a particular agency. But since one of the biggest potential harms is this question of faith in government, I'm not sure who you turn to, especially in the government, because historically governments are not particularly good at judging that question. We tried to think of ideas where we might involve broader civil society and bring them into this process. That has its own difficulties. And I would say that's still an area that I would love to focus on more in future research. But to me, that's one of the big challenges going forward in this space.

Daniel Linna

Yeah. I'll just add that I think we're recognizing the potentially tremendous impacts that deepfakes are going to have. And we've even seen research on the impact something can have on people even when they know it's fake. And just to circle back a little bit on the Equities Commission, we should say for the listeners -- we hope they'll go read the paper -- that if you do read the paper, you'll also see that we talk about how many of these issues would go up to the president. This is serious. These are serious questions. It's important to get in front of this and have these discussions. These scenarios we talked about will happen fast. But by having those discussions ahead of time -- and I think Dan and V.S. mapped out, with our scenarios and others, that there are certain things you expect will fall into different categories -- you have a sense of how you might address them. And when those decisions need to be made, the president is going to be involved in a lot of them, or the person in the equivalent position in other countries.

V.S. Subrahmanian

I want to add one more comment, which is that it is not always going to be adversarial governments that want to use deepfakes against us. It could be just an individual. And as I'm based in the Chicago area, I'll take the example of a Chicago man who, I think about a year back, created these deepfakes of the Pope that made the Pope look extremely dashing, a picture of sartorial elegance in these puffy jackets. And I don't think he intended anything malicious by that portrayal of the Pope, but what was interesting was that this was a person who had very little training in computer science. He just used off-the-shelf tools to produce these fantastic-looking images. That is something that could be turned by a single individual anywhere in the world against us or someone else. We really have to think carefully about not just what our government will do, but what other individuals and other governments may do, as well as what people in our country might do as individuals -- rogue individuals targeting foreign leaders.

Daniel Linna

That's such an interesting point. And we alluded to this in the article -- at least one of our interview subjects raised it: at what point does someone in the U.S. do something targeting another country, say China, and then China holds the U.S. government essentially accountable for that? How are we going to have to think about actions in this space to determine what threats could arise because of the actions of people like the individual you just identified?

Eugenia Lostri

Yeah, and thank you, Dan, for bringing that up, because I wanted to connect what V.S. was saying to the final question in your framework, which is that of traceability and the effects of attribution. Because it's not just governments, it's not just this accountable system -- there needs to be a response from the government, I guess, when you have all these different actors trying to influence the space. So tell us a little bit more about that question of traceability, of attribution. What are some of the concerns? But also, what are some of the options that this commission could look at?

Daniel Linna

Well, I think this is a big question now, isn't it? V.S., this is something you've been studying for a while. But if you read some of the headlines -- I spend a lot of time talking to judges, for example, and they're concerned about deepfakes coming into the courts, and I frequently hear, "Well, let us know when the computer scientists solve this problem so we don't have to worry about it anymore." And, well, I don't know that there's a computer science solution coming anytime soon.

V.S. Subrahmanian

I'm laughing, Dan, because everybody looks to people like me and says, "Aren't you guys solving this problem?" And I'm sorry to say, technology is not quite there yet. So in my lab at Northwestern, we've run lots of experiments trying to figure out how good existing technology is at detecting whether a particular digital artifact -- audio, video, image -- is a deepfake or not. And the technology does not do very well. That's what we realized. What I see working is something that brings together a combination of trained people, technology that those people are trained to work with -- where they understand the strengths and the weaknesses and the limitations of the technology -- and a process that brings those people and the technology together in a way that covers the bases. And in some cases, those people and that process and that technology might lead to an inconclusive answer. In some cases, they'll say, "Yeah, this is a deepfake. Here's why." In some cases, they'll say, "No, it's not a deepfake. It's real. Here's why." But in some cases, as technology gets better and better at generating these things, I fear that there will certainly be cases where we can't tell for sure.
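[Editor's note: the following is a toy sketch of the people-plus-technology process described above -- combining several imperfect detector scores and allowing an explicit "inconclusive" outcome that is escalated to trained analysts rather than forcing a verdict. The detectors and thresholds are hypothetical placeholders, not real tools.]

from statistics import mean

def assess(detector_scores: list[float],
           fake_threshold: float = 0.8,
           real_threshold: float = 0.2) -> str:
    """Each score is a hypothetical detector's probability (0-1) that the artifact is fake."""
    avg = mean(detector_scores)
    spread = max(detector_scores) - min(detector_scores)
    # Disagreement among detectors is itself a signal: escalate to
    # trained human analysts rather than guessing.
    if spread > 0.5:
        return "inconclusive: refer to human analysts"
    if avg >= fake_threshold:
        return "likely deepfake"
    if avg <= real_threshold:
        return "likely authentic"
    return "inconclusive: refer to human analysts"

print(assess([0.91, 0.85, 0.88]))  # -> likely deepfake
print(assess([0.10, 0.75, 0.40]))  # -> inconclusive: refer to human analysts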

Daniel Byman

One point we make in the paper -- and, V.S., actually I want to give you credit for making this point; it's not something I had thought about before this research -- was that even if right now a government uses a deepfake in a way that is not traced and thus is deniable, it's quite possible that in one or two years there will be better technology that reveals this. So even if the government doesn't leak and it's narrowly kept, all the boxes are checked, the future is uncertain on this. One of the challenges is revealing today's deepfakes in real time. But to me, it's at least more plausible that yesterday's deepfakes might be revealed one or two years down the road with more powerful computing and simply better design.

Eugenia Lostri

Taking all of this into account, I have to say I was a little bit surprised to see those three very clear cases where you had agreement that it would be acceptable for a government to use deepfakes. And V.S., you mentioned this before: it would be an immediate threat, a tit-for-tat response, and the question of education and discrediting. So, I was struggling to come up with my own hypothetical of what that would look like. Can you give us an example in which all of these questions that we've been talking about -- the potential for it to be attributed later, the harm to the environment, the potential for escalation -- don't pose a fatal challenge to, should we move forward with this deepfake?

V.S. Subrahmanian

The one hypothetical that I think we have fairly good unanimity on is intellectual property theft, where a company generates fakes and puts them in its own network, so that when an adversary steals information from that company, they take the real thing as well as the fakes and they've got to sort it out. I think most people felt that this is a case where companies do have the right to protect their investments -- often millions, hundreds of millions of dollars of investment -- from unscrupulous foreign actors who steal that intellectual property. So I think in cases like that, it's, to me, okay. That's because it's not a deepfake that's being put out publicly. It's not a deepfake that is targeting a specific individual. It's a deepfake of a process or a design -- a new pharmaceutical drug intended to cure some deadly disease, say. So I think it's fair for people -- or companies, I should say -- to protect their IP in that way. So that to me was a fairly clear-cut case. Almost everybody agreed with that. There was one person who did not.

Eugenia Lostri

And that case would not be subject to the Deepfake Equities Commission, right? Because it would be just a company deciding to have a honeypot folder where basically they have false information, hoping to confuse someone breaching their networks. So it would not involve the government acting on behalf of a company. Correct? Just want to make sure I got that right.

V.S. Subrahmanian

Oh, yes, you did get that right. But even today, I think companies are worried about liability issues. Dan's a lawyer; he can probably talk more about that. So I don't think the law is entirely clear on the nature of the protective cover they have when they use deepfakes within their own network for this kind of purpose.

Daniel Linna

Yeah, and I think your basic point is reasonable: private actors are going to have plans in place to protect their intellectual property, for example, and the U.S. government is not going to be reviewing every one of those actions. On the other hand, I think at some point there are relevant questions here -- this idea of due diligence, for example, discussed in the cyberspace area: to what extent does a government have responsibility for private actors who are undertaking attacks on other countries? Now, again, this is a little bit different from the way we framed it, where you could envision something that is maybe a bit more aggressive by companies to protect themselves. And this does come up in cyber -- the attacking-back question. I generally accept your point, but I do think that there needs to be some consideration of what responsibility a nation has for private actors in this space.

Eugenia Lostri

Dan, I'm going to stick with you for a little bit longer. We've talked about this process, the deepfake framework, the questions, but all of this is not happening in a policy or legal vacuum. As the lawyer of the group, could you maybe tell us a little bit more about the legal considerations that you need to keep in mind when thinking about this process? I think many of the scenarios, or all of the scenarios that you present, are set in peacetime, at least between the countries that are involved in generating and receiving the deepfake. There are principles of international law that you might want to keep in mind, even if we don't have specific facts of a case to discuss. What are the legal principles that need to be considered before moving forward?

Daniel Linna

Yeah, the key legal principles to consider here are sovereignty and non-intervention. At some point, some uses of deepfakes could cross into violating a nation's sovereignty -- really interfering in elections, in national security. The key question here is, of course, as Dan was talking about earlier, that there's a history of misinformation and influence operations and propaganda. So there's a whole area of activity that is essentially accepted: it's okay for a nation to have an opinion about who's going to be elected leader in another nation. But if it were to cross over into interfering in the reserved domain of a nation -- its choice of political leaders, its economic domain, its social and cultural systems, its formation of foreign policy -- to the point that it's coercive, essentially not giving the nation a choice to freely make decisions about these things, then that's going to be a problem under international law. Of course, the trick here is figuring out when it's really going to cross that line. The way we traditionally think about this: if there was some threat of force, okay, now it's pretty clear that we've got a potential issue under international law. With deepfakes, in most situations it's going to be far less clear that it would rise to that level as far as being a problem in these different areas.

Eugenia Lostri

Are those considerations different if you're already in armed conflict?

Daniel Linna

If you're already in armed conflict, for example, now we're going to be looking at the actions of the other party and what responses you're going to be able to bring. Yeah, so there's going to be a different set of rules that apply if there's already an ongoing conflict with the nation, yes.

Eugenia Lostri

Okay, you're not going to give me anything more than that.

Daniel Linna

Well, I mean, what we really focused on here, too -- we have a couple of scenarios talking about some of these settings where deepfakes may play a role, and we've talked about some uses that we've already seen in different areas. First of all, in the armed-conflict setting, the context here is so important. And just like any good lawyer, I'd want to push to pin down the specific facts before I would try to opine. But we've got these general overarching principles, and absolutely, in an armed-conflict setting, they're going to apply a little bit differently. We did talk in the paper a little bit about the idea that ruses of war are not prohibited. So this idea of misinformation -- getting people to think that there are, in fact, a hundred thousand troops on the border in one location when there's no one there -- there's well-established international law that things like that are not prohibited. If you cross the line into perfidy -- you allow an adversary to believe that, oh, the Red Cross is here and it's safe, you should come here for aid, or something like that, when instead it's an ambush -- that's problematic. So we have some clear rules in that space that would say certain activities are absolutely prohibited. We've seen some use of these tools already in that space, and I don't think there's any suggestion that what we've seen, for the most part, would be a violation of international law. But we know where some of the lines are for some of these other activities that could take place.

Eugenia Lostri

Great. Thank you. If we focus on the U.S. and we look at the domestic legal framework, are there existing policies, existing laws that apply here and that would shape what is possible for the government official who is interested in creating and deploying a deepfake?

Daniel Linna

Well, for deploying a deepfake in other countries, I'm not so sure that there's anything in place that is going to create much in the way of prohibitions. There have been, of course, some laws passed in a handful of states in connection with the use of deepfakes in elections and things like that, to close some gaps in existing laws. So those, of course, would need to be considered in the domestic sphere.

Eugenia Lostri

Now, as we're wrapping up, I do find it interesting, and refreshing, that you all have fairly different backgrounds and contribute different perspectives on how to solve this problem. So I'm interested in hearing what is maybe a specific lesson learned for you from this research that you think changes or shapes your field.

Daniel Linna

I'll go ahead and go first, since I've been talking about the law here for a little while. And this is just an observation about law and technology generally: I think to solve these problems, we need interdisciplinary approaches. If you're just reading the headlines on the deepfakes problem, for example, it's easy to think, oh, there are technical solutions to this. And, well, it turns out that the technical solutions are probably not going to completely solve the problem. So having a deeper, better appreciation for the problems and for the expertise that others bring matters. And I think, to the extent we're thinking about the way the law needs to respond, it has to be interdisciplinary. One of the other things I find refreshing about being able to interact with experts like Dan and V.S. is realizing there's uncertainty -- although that's disconcerting at the same time. So these are hard problems. We need to bring interdisciplinary groups of people together to work on them.

Daniel Byman

One thing that I wrestled with really goes back to one of your first questions, which is how to think about what's new and what's not. And as you ask, isn't a lot of this something we've seen for centuries? The answer is, of course, yes. What's interesting to me is that, to the extent deepfakes are different, some of it is also an information environment where it's really hard to separate overseas versus the United States. When we talk about intelligence operations, foreign intelligence operations, or military operations, usually the context for Americans is "over there." But in a globalized information environment, that's not a particularly meaningful distinction. So how to think about this potentially very powerful tool in an information environment where Americans may be seeing it the next day, as well as those in adversary countries -- that question is something that I, at least, am still wrestling with.

V.S. Subrahmanian

I want to add to that, this really is an interdisciplinary problem. Technologists like me, we start out by thinking we can solve everything technologically, but really there's a menu of options. We don't understand when we use deepfakes how an adversary government is going to react. We need experts who've studied those governments in many cases for decades to give us their assessments. We need to understand what the legal instruments are that bear on the problem. Computer scientists like me don't know the answer to those questions. And that's why I talk to people like Dan Linna. And we don't understand necessarily how a foreign intelligence agency is going to think about an operation that's carried out by the U.S. We need experts in intelligence for that. And so, folks like Dan Byman with deep expertise in history, political science, intelligence are essential to the understanding, to the creation of a holistic framework to think about these issues.

Eugenia Lostri

Well, I think that's a great point to end on. Thank you all so much for joining me today. This was a great conversation and I do encourage anyone who found this interesting to go read the report, see the different hypotheses. It's really an interesting paper. Thank you.

Daniel Byman

Thank you so much.

V.S. Subrahmanian

Thank you so much.

Daniel Linna

Thank you.

Eugenia Lostri

The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get an ad-free version of this and other Lawfare podcasts by becoming a Lawfare material supporter at patreon.com/Lawfare. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts.

Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6. Check out our written work at lawfaremedia.org.

The podcast is edited by Jen Patja Howell, and your audio engineer this episode was Cara Shillenn of Goat Rodeo. Our music is performed by Sophia Yan. As always, thank you for listening.


Eugenia Lostri is Lawfare's Fellow in Technology Policy and Law. Prior to joining Lawfare, she was an Associate Fellow at the Center for Strategic and International Studies (CSIS). She also worked for the Argentinian Secretariat for Strategic Affairs, and the City of Buenos Aires’ Undersecretary for International and Institutional Relations. She holds a law degree from the Universidad Católica Argentina, and an LLM in International Law from The Fletcher School of Law and Diplomacy.
Daniel Byman is a professor at Georgetown University, Lawfare's Foreign Policy Essay editor, and a senior fellow at the Center for Strategic & International Studies.
Daniel W. Linna Jr. is a senior lecturer and Director of Law and Technology Initiatives at Northwestern Pritzker School of Law and McCormick School of Engineering.
V.S. Subrahmanian is the Walter P. Murphy Professor of Computer Science and a Buffett Faculty Fellow in the Buffett Institute of Global Affairs at Northwestern University. He has worked for over three decades on the development of AI techniques for national security purposes.
Jen Patja is the editor and producer of The Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
