Cybersecurity & Tech

Lawfare Daily: Rethinking Deepfake Response with Gavin Wilde

Justin Sherman, Gavin Wilde, Jen Patja
Friday, September 26, 2025, 7:00 AM
What has the impact of deepfakes been on information and society?

Published by The Lawfare Institute
in Cooperation With
Brookings

Gavin Wilde, Nonresident Fellow at the Carnegie Endowment for International Peace, adjunct lecturer at Johns Hopkins University, and author of the recent paper, “Pyrite or Panic? Deepfakes, Knowledge and the Institutional Backstop,” joins Lawfare’s Justin Sherman to discuss worries about deepfakes and their impact on information and society, the history of audiovisual media and what we can learn from previous evolutions in audiovisual technologies, and the role that fakery has played over the centuries in said media. They also discuss the social media and political context surrounding deepfake evolutions circa 2015; what happened, or not, with deepfakes in elections around the globe in 2024; and how institutions, policy, and law might pursue a less technology-centric approach to deepfakes and their information impacts.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Click the button below to view a transcript of this podcast. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Intro]

Gavin Wilde: The line between evidence and expression, between depiction and detection, or between capturing and manipulating an event onto film or tape has always been very blurry. It's always required some secondary or tertiary bits of context and information to make sense.

Justin Sherman: It's the Lawfare Podcast. I'm Justin Sherman, contributing editor at Lawfare and CEO of Global Cyber Strategies, with Gavin Wilde, non-resident fellow at the Carnegie Endowment for International Peace and author of the recent paper “Pyrite or Panic,” about deepfakes.

Gavin Wilde: In a real way, we shouldn't conflate deepfakes’ ability to attract attention necessarily with their ability to displace the way that we formulate knowledge, both personally and as a group.

Justin Sherman: Today, we're talking about deepfake worries, historical parallels with audio-visual media, and a less tech-centric response.

[Main episode]

Justin Sherman: You've been doing a lot of interesting research for some time now on a range of technology and information and geopolitics and other topics. So undoubtedly, many listeners, they've heard you on the podcast before, they're familiar with your work, but why don't you just tell us a little bit more both about your background as well as the variety of different research topics you're working on right now.

Gavin Wilde: So the bulk of my career has been spent as a Russia watcher in the U.S. intelligence community, particularly attempting to kind of understand and explain to U.S. and Western officials how Moscow conceptualizes the information space, how Russian leaders tend to perceive threats and opportunities in arenas like cyber conflict or propaganda or surveillance and what drives their strategic culture in those areas.

And since I pivoted to the think tank and academic world a few years ago, I've kind of drawn upon that experience to try to understand our own philosophies here in the United States, how and why we think the way we do about information and communication and technology and what that means for how we fight wars and craft our own foreign and security policies.

So if there's a common thread that probably runs through most of my research, it's that sometimes our own process of securitizing or even overhyping issues like cyber warfare or disinformation or artificial intelligence entails its own risks and opportunity costs.

Justin Sherman: We're going to talk just in a minute about that very theme. Today, we're going to cover deepfakes, or fake-but-realistic audio, video, or images that are produced by deep learning or AI technologies. That's not a NIST definition or something, perhaps you have a different definition, Gavin, but just to say at a general level what they are.

We've worried more about this, right, in recent years, policymakers, media, the public, for a variety of reasons, right? Some of this is that, to my knowledge, this is still the case based on lots of the data: unfortunately, the overwhelming use of deepfakes today is to create non-consensual intimate imagery, predominantly of women. There was also a lot of concern in D.C. circles and elsewhere, kind of to your point about worrying about things, securitizing things, about politics, about elections, about national security weaponization. So we're going to get into all of that today.

And to do that, I want to start with––we're going to go broader than this one piece, but in part, I want to start with an excellent paper that you recently wrote titled, “Pyrite or Panic? Deepfakes, Knowledge and the Institutional Backstop.” It's a great way to kick off our conversation here. I'm, as I joke, always terrible at titles. This is quite a good title here. So tell us about this title. What does it mean? What is the paper about and what motivated you to write it?

Gavin Wilde: Yeah, so let me kind of address those in reverse order. I originally approached this paper as a follow-up to a previous piece I did for the Texas National Security Review that looked at the risks of overhyping the threat of foreign propaganda online. And so this was intended to be a bit of a sequel. And similar to that issue, we've seen a deluge of very alarmist commentary from think tanks and academics and commentators in the past few years.

A lot of this commentary pivots on the theme that the advent of deepfakes potentially signals a looming collapse in trust in audiovisual media altogether, or a major erosion in our ability to collectively make sense of things and produce knowledge.

So, for instance, we had some great papers by legal scholars Robert Chesney and Danielle Citron about the so-called Liar’s Dividend which is this idea that, as hard as it is to prove something is true, deepfakes are going to enable bad actors to flip the script and in essence ask us to prove that it's not fake. Or, philosopher Regina Rini had a great paper talking about the erosion of trust in the evidentiary quality of media to serve as what she calls a backstop for human perception, which is often flawed by comparison.

So these papers were both fascinating and compelling to me. And so I wanted to undertake an effort to either confirm or refute or add to this body of work, less from the standpoint of the machine learning technology itself than from media studies and sociology and historical experience. And the key questions I came at it with were: what is the nature of our interaction with audiovisual media? How much do we rely, or perhaps even over-rely, on media to serve as a backstop for our search for knowledge?

And more broadly, are these kinds of speculative futures about so-called artificial intelligence obscuring or distracting from more salient present day harms? And so in terms of the title, I like to use a good analogy that can cut through a lot of complexity and jargon. And in this case, I found that pyrite served that purpose fairly well.

And so for those who are not mineral nerds, pyrite is just the formal name for the mineral that most of us know as fool's gold. So just like deepfakes, fool's gold is often pretty identical to the real thing. And you could be forgiven for assuming that the discovery and the relative abundance of the mineral pyrite compared to that of real gold might've been catastrophic for early prospectors and a bonanza for counterfeiters.

But historically speaking, that wasn't really the case. The value of real gold remained pretty stable despite the discovery of pyrite, and your odds of striking gold didn't actually change. Over time, what happened was the existence of fool's gold, and the broader awareness of that existence, simply placed a greater premium and emphasis on the processes and methods of discovering and validating the real thing.

And as I write in the piece, the worth of pure gold is ultimately a social construct. It's based not only on its relative scarcity in the ground, but on the customs and the disciplines involved in mining gold and assaying gold. And so the so-called gold standard originated not really as a claim of perfection, but as a way to curb counterfeiting through standardized practices.

And so in this same vein, I argue that knowledge is similar. It is and must continue to be kind of backstopped less by advances in digital media than by social and institutional processes like academic and civil society institutions, libraries and library science, localized journalism, publicly accessible digital resources and archiving, standards-setting and credentialing organizations and evidentiary rules, all of which are more social and institutional than necessarily technological.

Justin Sherman: You just did this with the pyrite analogy. One thing I always enjoy about many of your monographs and papers and articles is––what you were just saying, sort of looking at sections of history, distilling parallels that can inform how we think about some issue today that's treated, as you noted, with deepfakes and AI as newer. I'll put that in quotes as we'll get into, you know, maybe some things aren't so new.

So let's walk through that history a little bit. What can we learn about the advent of any new technology and how it's perceived, particularly when it comes to audiovisual media?

Gavin Wilde: So I think one of the major takeaways of the paper and the lessons of kind of audio-visual media over the past two centuries is that we tend to focus on their potential political and epistemological impacts––that is to say, kind of what they mean for how we formulate knowledge––while ignoring their most likely use cases, which are usually to amuse and entertain ourselves.

So in other words, the subtle and often unintentional impacts on art and popular culture tend to make so many of the doomsday scenarios seem in hindsight kind of misplaced. So you have Thomas Edison or the Bell Telephone Company, assuming that sound recordings and telephones would be primarily used for business dictation and commerce, while music production and idle chitchat by housewives weren't really at the front of their mind, but probably should have been.

Or you have these academics in the 1600s thinking that they're going to educate the public about optical illusions using light projection to debunk hucksters and magicians, only to find themselves kind of further fueling this fascination with the supernatural in the process.

One of the other myths that gets kind of debunked when you look at this history is this idea of inevitable obsolescence or this notion that because we've invented a new thing, the old thing or the old medium is now completely done for.

So for example, if you're a painter in the 19th century, you're getting really worried about photography. If you're a book publisher in the early 1900s, you're freaked out about nationally syndicated newspapers. If you're working with typesetting or printing of any kind, the advent of the phonograph spells certain doom for your profession.

Commentators were worried that regional dialects would get watered down if a person in Boston can simply pick up the phone and chat with someone in California. Textbook publishers worried that classrooms would be dominated by motion pictures instead. Record producers lived in fear of broadcasting because after all, why would you listen to a vinyl record at home if you could just tune into the radio? Now, of course, all of these kind of speculative futures were either wildly overstated or some of them were completely outlandish to begin with.

So the point is that our cultural and social practices tend to keep many forms of audio-visual media thriving, including those institutions that validate their authenticity and credibility long after their purely practical functions might age off.

Justin Sherman: We can go into several of those and we had time, would, on Boston dialects and other interesting things. This is a good segue though, back to the deepfakes point, which is, as you had noted up top, there are many ways in which we––and when I say we, you know, policymakers, the media, et cetera––think about deepfakes as new.

And of course, people in, you know, 1650 were not firing up their iPhone and asking ChatGPT to, you know, make a fake video with it, with another machine learning model. But fakery itself, as you explore in this paper, and directly related to what you were just saying, is not new. And so what role did fakery play in audiovisual media during these past centuries and time periods we're talking about here?

Gavin Wilde: So the advent of photo and audio recordings is pretty closely attended by fakery. For instance, early photos couldn't really capture landscapes very well, so folks would have to go in and paint skylines and clouds on top of the initial prints. And a lot of the images that accompanied stories in early newspapers were pretty routinely doctored like this.

Meanwhile, many early recordings were staged recreations of famous historical speeches and famous contemporary speeches, where many listeners in the audience had no idea whether they were listening to this event happening in real time or whether the voice they were hearing was that of an actor or the actual prominent figure coming through the speaker themselves. And it was really only well into the 1900s that society broadly started to accept or expect that audio visual media would serve as evidence.

And so this illustrates an important point about audiovisual media more broadly that the line between evidence and expression, between depiction and detection, or between capturing and manipulating an event onto film or tape, has always been very blurry. It's always required some secondary or tertiary bits of context and information to make sense.

And so this idea, however much we've evolved away from it, this idea that the facts, the whole facts, and nothing but the facts can somehow be fully expressed or self-evident within a recording, either in 1865 or 2025, has always been very flawed. And so, for instance, more recently with the advent of Photoshop in like the late 1980s, you see this rapid democratization of the ability to make realistic-looking fakes and a concurrent scrum by academics to do what was called at the time media forensics, or the ability to detect subtle traces of manipulation on a digital image or recording.

But ultimately what they found was there wasn't as much demand for media forensics as predicted. One of the most prominent examples in my lifetime was a famous picture of a guy in a cap standing on top of the World Trade Center in Manhattan with a passenger jet approaching unbeknownst to him just below. That photo proliferated through email inboxes throughout my community and my high school, sparking tons of conversations and speculation.

Now, of course, the photo was a fake. It was later found to be made by a Hungarian teenager who was looking to kind of joke around with his friends, not necessarily to spark a global hoax. And on the one hand, a complicated digital tool would definitely be able to tell you that that photo had clearly been shopped. But on the other hand, there are plenty of contextual, non-forensic reasons to be highly skeptical, at least for anyone who cared to investigate the provenance of that photo.

And in that regard, historians like Lewis Mumford and Koen Vermeir note that context is the key, that social uncertainty and anxiety always tend to find an outlet in our fascination with illusion. So whether that context is the Enlightenment, or the Industrial Revolution, or the aftermath of 9/11, we can't really separate the cultural appetite for spectacle or escape or illusion from that audiovisual fakery. But we also can't confuse this appetite with some kind of collapse of institutional knowledge.

Justin Sherman: So we're not too far off from the present with the examples you just gave, such as around 9/11, but let's come further to the present.

So around 2015, as you explained, there were great advances made in the capabilities of so-called deepfake technology. This also arrived, though, at the same time, as we'll all recall, as there were lots of concerns and incidents related to social media, to disinformation––take your pick, the 2016 election, Cambridge Analytica, you know, the conversations in D.C. and in the media about the virality of false information and propaganda––all of which are issues with which you, given your work and the other things you noted, are intimately familiar.

So what cultural and political and online context, in a sense, were these 2015-era deepfake developments walking into, and how did this broader context of these other concerns about disinformation or social media impact both the actual role that deepfakes played in society as well as how the public perceived the deepfakes in society?

Gavin Wilde: Right. So, I mean, by the time you have the first papers rolling off, you know, the academic presses about the potential harms from deepfakes in 2015, we already had a very fractured media landscape. We already had a continuing erosion of trust in institutions. You have rising concerns about disinformation and, you know, fakery more broadly online. You've got a rise in authoritarian sentiment throughout, you know, certainly the globe. And you also have an elite class that's looking for answers as to why they lack the authority and the credibility that they used to have.

And then as you mentioned, certainly in the aftermath of the 2016 elections, all of the incentive structures kind of existed to look at deepfakes through the prism of a, quote unquote, post-truth and politically polarized world.

So I think, you know, just like we'd place the advent of any technology into that cultural context of its time, it's important to look at the rise of deepfakes with the caveat––or at least the awareness––that we were already well-primed by this point to consider them from the viewpoint of a very vulnerable societal context, and to think first and foremost about how they might be weaponized politically and geopolitically.

But one of the findings that I point to in my paper is that that tends to distract us from, again, this historical trend where amusement and entertainment and cultural, kind of, frivolity tends to be where a lot of these new breakthroughs in audiovisual technology find themselves manifested before a lot of these more weighty subjects, of course.

Justin Sherman: 2024 in some ways was a test case for deepfakes and for some of the dynamics you were just describing, and it came after almost a decade of theorizing, I like the word you used earlier as well, speculating, about the threat of deepfakes in the context of elections, some 40 of which were taking place last year in democracies around the globe.

How did that actually play out? Give us the facts of what happened, but also in your analysis, like did the worries about election deepfakes and democracies and the threats to public discourse and so on play out in 2024 in the way that folks were predicting?

Gavin Wilde: I would say broadly not. Compared to the degree of concern and hype about deepfake-assisted election interference or influence, 2024 was the dog that didn't bite, despite a few isolated incidents. I held several discussions with platforms and researchers, most of which found that deepfakes weren't really used any more than any other types of content. The ones they found were pretty easily debunked or recognized as being synthetic, and there was no real broader deception campaign that they might have been linked to.

So, to quote one YouTube executive that I spoke with, with the deepfakes we saw, the fakery was the point. So in other words, they were mostly used as a form of joke or political cartoon, but more broadly, there was no previously untapped demand for propaganda or disinformation-type content that deepfakes were somehow poised to now supply in 2024.

And by the way, this lines up with what researchers at Notre Dame University found about the majority of fake content online. They found that it was mostly memes and political cartoons, artistic expression that was more winking than deceptive in its intent.

So the point was less to deceive your mind than it was to tickle your funny bone, so to speak. And that cuts to another key theme of the paper, which is relevance. So according to experts like Dan Sperber and Hugo Mercier and Sacha Altay, humans pretty much ignore most media content that isn't immediately and deeply relevant to them. But most of this fake content exists in arenas like commercial or political advertising, where we tend to engage at a very, very superficial level.

And even when we do engage with this content, cognitive scientists warn against concluding that passionate commitment to certain ideas or in-groups necessarily operates the same way as knowledge formulation. Meanwhile, political science cautions that people insincerely make or accept certain claims as a way to publicly signal their allegiance in a community, even while they probably harbor quiet doubts about the factual veracity of those claims. And so in a real way, we shouldn't conflate deepfakes’ ability to attract attention necessarily with their ability to displace the way that we formulate knowledge, both personally and as a group.

Justin Sherman: I mean, you made several interesting points. I do want to double click just for a minute on the piece you mentioned about your conversations with social media platforms. Is it your sense that––sort of two side questions off of that. One, is it your sense that the platforms had any particular takeaway themselves about deepfakes, and their preparations for them and so on, in 2024?

And then second, do you think that, especially in the current environment––and I say that referring, among other things, to the de-emphasis in the United States on regulation and content moderation––do you think those changes the platforms are making now are going to make it more likely or lower the barrier for manipulative deepfakes, or is it still going to be a lot of satirical kind of content?

Gavin Wilde: I think that's the challenge. I certainly got the sense from most of the folks that I talked to that they were at an institutional level kind of surprised at how much of it was winking, kind of leaning into the aesthetic, the fakery of the art form of a deepfake video, less trying to kind of cast some, quote unquote, recording as evidence of some kind of interference or foul play.

I gathered some sense of surprise at the ratio of that kind of cartoonishness versus deliberate manipulation. I would also certainly not discount that they got better and probably applied a higher level of scrutiny around monitoring and countering the proliferation of synthetic media on their platforms.

But a lot of that, I think, is dependent on an inexact science of detection, which all of us are still struggling with, that there's no good easy button to kind of identify it. And so all of the resources of content moderation––which, as you note, are probably lesser and lesser in the intervening years, just given the political climate––I don't know how much of that they're willing to dedicate towards kind of identifying and pulling down inauthentic content, particularly if they've already kind of concluded that most of it is essentially harmless or playful in its intent rather than trying to be manipulative. But I imagine that's going to prompt them to take a much more case-by-case approach.

Justin Sherman: No, that's––I appreciate you indulging, because that's really interesting. And I'm going to mess up the year, so listeners forgive me, but you know, this calls to mind, just to say, you mentioned how it was a little overblown and this and that, and there were other ways that content was spread.

I'm just recalling, I forget if it was 2020 or 2022 or 2018, one of those midterm or election years, but there was a study I'm remembering done out of Harvard that had looked at election disinformation and misinformation on social media at the time. It was not related to deepfakes, but it had in some ways an analogous conclusion, which was that they had found that a lot of the dis- and misinformation had spread through social media only as a secondary vector. It was actually just people talking at press releases and the mainstream media saying certain things, repeating certain things, that then get reposted online. And I always think that's kind of interesting.

But I wanted to come back to something you've mentioned a number of times, which is there's a lot of attention to how believable a deepfake is, the realness, if you will, of a deepfake. But you write in this paper that there's a lot less attention to the processes that audiences undergo when they encounter a deepfake. And so why do you think that is? Why is there so much focus on believability rather than how people actually perceive them, and then what impact does this disproportionate focus have?

Gavin Wilde: Yeah, I mean, a lot of these studies focus on, well, did this deepfake dupe you? Did you think it was realistic? Which is indeed a very interesting question. But, you know, we don't survey folks coming out of an action movie and kind of say, well, how realistic was the action or the gore? Did it convince you that it was real life? Like, no, because the context and the relevance are not the same.

By discounting, you know, those two factors, the relevance and the context, the focus on how real a deepfake “looks” tends to sweep those under the rug. But that's what actual persuasiveness hinges on. To be persuaded of something is not just a matter of how realistic it looks in a certain audiovisual context; everything surrounding that context, and how relevant it is to a human being's immediate circumstances, plays a much larger role.

So this is why you don't have scientists going around and crashing magic shows, or Olympic wrestling coaches kind of protesting outside of WWE headquarters, or music composers writing long treatments about how vaporwave music has perverted musical theory. If we did, the response would be: guys, you're taking this too seriously. You're trying to have an empirical conversation while we're trying to have an experience, or we're trying to examine the aesthetic qualities of these shows. We're all wise to the act and we choose to be here anyway.

And so in this regard, I think rather than thinking of deepfakes as deteriorating knowledge, it's probably far more useful to think of them as enhancing kayfabe or BS, neither of which has ever really demanded so much attention and resources from an academic or policy perspective.

And that aligns, by the way, with much of the recent research, as you mentioned, on mis- and disinformation more broadly, that there are very few folks who are simply duped and fall down a rabbit hole of delusion. Most of those who encounter it or dabble in it willingly opted into that arena, much like professional wrestling. And this is not to say that deepfakes or disinformation are not problematic issues, but rather that the cause and effect chain is not as short or direct as I think we've sometimes been given to suppose.

Justin Sherman: Is there anything to be said about the speed and the scale of information dissemination online and the pace of AI R&D––using AI, again, as an umbrella term? Is there something to be said about that making deepfake activity qualitatively or quantitatively different from some of the historical examples you mentioned of falsification in fake media?

Gavin Wilde: I think there are definitely useful and beneficial applications for continuing to try to find ways to detect or virtually watermark or ask platforms to kind of voluntarily help us distinguish between synthetic and authentic media that's posted on their apps or platforms. I think all of those are great things. But, you know, again, to borrow the analogy, to the extent that we try to develop a detector for pyrite while kind of ignoring all of the institutions and the talent and the credentialing that goes into mining and assaying gold, the real thing, we're kind of missing the forest for the trees.

And so I think, as I say in the paper, if over-reliance on audiovisual media as evidence has brought us to this place of alarm about deepfakes, doubling down on that bet, that more digital reliance is the answer, is probably misguided. So that's where I would recommend, and do in the paper, lending equal if not greater focus to reinforcing the institutional scaffolding.

That if we're concerned about knowledge generation, well, the social and institutional processes––the gatekeeping even, if you will––for what constitutes knowledge, we need to reinforce those. In my view, policy-wise, we've in many areas run in the opposite direction. Where again, archiving and library science and copyright, these are the things that I think help us to ascertain the real from the inauthentic, or the authentic from the manipulated.

So that's the recommendation, is I wouldn't recommend against trying to get better at detecting these digitally. But if that's the preponderance of our focus, I think we're probably missing a key element of the solution space.

Justin Sherman: Well, this is a perfect segue to filling in that solution space for us. So you write in the paper, as you noted, about a misplaced emphasis on, quote unquote, wishful worries versus actual agonies. To step back, one, how is the threat from deepfakes mischaracterized, in your view, from a legal and policy perspective? And then, two, you just mentioned solutions.

Are there ways we're responding well or poorly right now, and what would that different approach to deepfakes look like?

Gavin Wilde: Yeah, so I mean, you mentioned several of these actual agonies at the outset. Despite now almost a decade of theorizing about the widespread societal epistemological impact of deepfakes occupying so much oxygen in both our research and policy discussions, there's the non-consensual imagery of women and children and other vulnerable members of our population, and the voice-cloning not only of celebrities, but also scams using cloned voices to defraud financial institutions.

These are, at present, still pretty much regulated with a patchwork of state statutes and common law principles, but we don't have much federal- or national-level attention on codifying the rules of the road for this kind of activity that is actually impacting people directly today. It's not in the realm of the speculative, where a lot of the political manipulation still seems to remain.

So, right now you have 45 states with laws against generated or modified CSAM. You have a patchwork of state statutes on the right of publicity that protects against unauthorized use of a person's voice or their likeness. You have a very patchwork approach to copyright infringement and AI-generated textual slop kind of proliferating in the online space for both book publishers and even music producers. Again, these are ongoing and prolific harms that are taking place that are much less sexy issues for research papers and policy focus, but probably much more salient.

And so I think that's where I would focus. Again, if we are worried about the epistemic harms, I would focus much more on the institutional backstops, but more broadly from a legal and policy perspective, these are the actual concerns that probably need the most attention, but don't seem to be getting it, particularly at the federal level.

Justin Sherman: That's all the time we have. Gavin, thanks very much for joining us.

Gavin Wilde: Thanks so much, honor to be here.

Justin Sherman: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts.

Look out for our other podcasts, including Rational Security, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja and our audio engineer this episode was Cara Shillenn of Goat Rodeo. Our theme song is from ALIBI Music. As always, thank you for listening.


Justin Sherman is a contributing editor at Lawfare. He is also the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm; the scholar in residence at the Electronic Privacy Information Center; and a nonresident senior fellow at the Atlantic Council.
Gavin Wilde is a senior fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. He previously served as director for Russia, Baltic, and Caucasus Affairs at the National Security Council from 2018 to 2019, where his focus areas included election security and countering foreign malign influence and disinformation.
Jen Patja is the editor of the Lawfare Podcast and Rational Security, and serves as Lawfare’s Director of Audience Engagement. Previously, she was Co-Executive Director of Virginia Civics and Deputy Director of the Center for the Constitution at James Madison's Montpelier, where she worked to deepen public understanding of constitutional democracy and inspire meaningful civic participation.
