Cybersecurity & Tech

Lawfare Daily: Daniel Holz on X-Risk and the Doomsday Clock

Kevin Frazier, Daniel Holz, Jen Patja
Monday, December 30, 2024, 9:00 AM
What is the purpose of the Doomsday Clock?

Published by The Lawfare Institute
in Cooperation With
Brookings

Daniel Holz, professor at the University of Chicago in the Departments of Physics, Astronomy & Astrophysics, Chair of the Science and Security Board of the Bulletin of the Atomic Scientists, and the founding director of the Existential Risk Laboratory (XLab), joins Kevin Frazier, Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin and a Tarbell Fellow at Lawfare, to discuss existential risks, the need for greater awareness and study of those risks, and the purpose of the Doomsday Clock operated by the Bulletin of the Atomic Scientists.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Intro]

Daniel Holz: The fact that these risks are increasing and yet people still seem relatively blasé about them, I think that is particularly terrifying because that makes the chance of kind of bumbling into nuclear disaster so much more likely.

Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, senior research fellow in the Constitutional Studies Program at the University of Texas at Austin, and a Tarbell Fellow at Lawfare, joined by Daniel Holz, professor at the University of Chicago in the departments of physics, astronomy, and astrophysics, chair of the Science and Security Board of the Bulletin of the Atomic Scientists, and the founding director of the Existential Risk Laboratory, better known as the XLab.

Daniel Holz: In part, these are connected as people see, as countries get impacted and realize, yes, I mean this will impact our quality of life and our security, national security. If we destabilize the globe, that's not good for civilization.

Kevin Frazier: Today we're talking about the purpose and need for study of existential risk and the role of the Doomsday Clock operated by the Bulletin in raising awareness of those very risks.

[Main Podcast]

Okay, Daniel, you wear more hats than a haberdasher. I am just blown away by the number of titles you hold, but I want to focus first on your status, your credential as the founding director of the Existential Risk Lab at the University of Chicago. So, can you give us a sense of what the heck the X-risk Lab is and how it got started?

Daniel Holz: So, yeah, the Existential Risk Lab was something I started a few years ago, and it came about because I was noticing that, you know, students would come up to me worried about existential risk. They'd be worried mostly about climate. Some of the more informed students were worried about nuclear issues. A lot of them were worried about AI. And they would come to me and ask, you know, how do I learn more? Suppose I want to, you know, have a career in this, suppose I want to make a difference in this topic, you know, how do I go about it?

And at the time, my answer to them was pretty vague. There is no clear path. I mean, for every individual student, you can try to craft something. But basically it ends up being, well, pick one topic, and here is a very specific way you might get engaged, but there is sort of no general way to say, oh, you're interested in these topics? Here's a way to get informed. And here's a way to learn what research might look like and get trained and, kind of, adjust your career to go have impact. And so the hope was to create a pathway for that, and the Existential Risk Laboratory, you know, in part, the goal is to do that. That's part of it.

And the other part is when you are involved in this you realize, I mean, there's so many interesting questions and profound challenges and many of them are profoundly, you know, just intrinsically interdisciplinary, and it's very hard right now in the existing sort of academic structures to do interdisciplinary work on something like existential risk.

So, for example, you could certainly do work on climate. You could go to the geophysics department and become a climate expert. But if you're specifically interested in how climate might have economic or political consequences, that gets much more complex. And it's not as clear what the path would be.

And so, part of the point of the Existential Risk Lab, the XLab, is to address that as well, to build a kind of academic community focused across all these disciplines, and really broadly. And when I say that, I mean, I want to include philosophers and I want to include artists, and even economists and lawyers, as well as…

Kevin Frazier: Even lawyers! I just have to say that, you know, it was such an honor and privilege to get to teach some of your fellows about emergency powers. So, for all those lawyers listening, yes, the X-risk Lab, the XLab, even, is willing to consider lawyers as a worthy discipline.

Daniel Holz: Yeah, well, you know, thank you again. I hope we'll get a chance to talk about that, but thank you again for doing that. And that was one of the best sessions. The students are still talking about it.

Kevin Frazier: Oh, my gosh, they're probably just wondering why my jokes were so bad, but, with that in mind, I want to walk through two things in particular. First, can you help listeners understand what qualifies as an x-risk and what doesn't? You and I had a wonderful conversation about black holes. And I think when some people hear existential risk, they immediately go to these end-of-the-entire-world scenarios where the earth gets swallowed up by a black hole, or where we all detonate nukes and everyone's gone the next day. What does it mean and what qualifies as an existential risk?

Daniel Holz: Yeah. And this is one of those questions that depends a lot on whom you ask, and we try to stay somewhat open-minded about the question of existential risk. You know, as you mentioned, my day job is to study black holes: I'm an astrophysicist.

And a black hole wandering nearby is certainly, you know, a risk to civilization. But when you run the odds of that happening, they're very, very low. And so that's not something that we spend a lot of time thinking about. And that kind of encapsulates, you know, the approach, which is we're worried about things that might happen that will impact civilization in a, you know, profound way.

And by that, we mean, you know, maybe hundreds of millions of people will perish. The way society is structured right now, kind of falls apart and perhaps regresses in certain ways, all the way to all of humanity being extinguished. And so that's a broad range.

And some things like a nuclear exchange among the major powers would certainly qualify. But there are many other things you can worry about, including, you know, unchecked climate change, which will certainly destabilize civilization as we know it, and may lead to increasing conflict, which in turn may lead to nuclear exchange. And, you know, of course, there's a lot of discussion of AI as a potential risk, and some of the scenarios there are very troubling. So there's kind of a range.

What we generally do is try to get some sense of how catastrophic a given risk would be, and how likely. And then there's always this third question of what can you do about it? And we're really trying to do all three as part of XLab.

Kevin Frazier: That's certainly a tall order for everyone involved.

And before we move on, just to press on that a little bit, there's a critique that's come about mainly from AI considerations of x-risk. I think there are a lot of people who are in the quote unquote AI safety community who have raised a lot of x-risk concerns. And one popular critique has been, hey, you know, it's great that you're concerned about these low probability but high consequence events. What we need to be focused on are the immediate concerns posed by nukes, posed by AI, posed by biological weapons, what have you.

What's your response to those people who say, why the heck are we investing in the study of these low probability events, if we have so many things that are, for lack of a better phrase, low hanging fruit, that we should instead be focused more on?

Daniel Holz: Yeah, so first, it's definitely not a case where we can only pick one, and you have to have your one favorite disaster, and then we should all only work on that. There are a range of issues, and people will have different priorities, and that's fine. I mean, we're trying to understand better what the risks really are.

I think with AI specifically, it is true that there's a range of outcomes here. I mean, people argue about what the threats really are, but one of the things that people argue about a lot is the timescales. And yes, some people say, oh, this could be decades or a century off. But you can certainly find experts who think that some of the more worrisome aspects of AI are around the corner, just a few years away.

And these are not, you know, sort of cranks or fringe scientists off on the edge. These are very mainstream scientists who are very familiar with the technology. And they're very concerned on a short time scale. And I think with AI in particular, one of the things that characterizes it is that people are having trouble, kind of, characterizing how quickly things develop.

I think it's fair to say many people in the field, if not most people that I've talked to, have been surprised by the sudden increase in capability of systems over the last few years. I mean, people have been working on this for decades. Progress was slow. There was an expectation that at some point there'd be some breakthroughs. But the sort of quality of the breakthrough and the speed with which it's happened, you know, has taken quite a few people by surprise.

Now people are trying to extrapolate what the state of the technology will be in two years or five years or 20 years. And that is a very difficult, you know, activity. And if you had done that five, 10, or 20 years ago, you'd probably be completely wrong. I think people then had both very short and very long timescales in mind. A lot of people thought accomplishing what has already been accomplished was extraordinarily unlikely. I think even if you'd taken a poll just a few years before, you know, the kind of ChatGPT moment, a lot of people would have said that was maybe decades off.

So, at least from the XLab perspective, we're very hesitant to just say this is not an urgent concern. It has a different feel to it than, say, a nuclear exchange. And I do think the nuclear risk is incredibly urgent and perhaps even more relevant. The nuclear risk is underappreciated, I would claim, at this moment, while the AI risk, I think, is appreciated. People are talking about it a lot more at all levels.

I'm not sure, you know, there's obviously a range of opinions, but the fact that there's discussion, and various organizations are trying to engage with the risk and maybe control the risk, that there might be regulation, I think that's all an encouraging sign. On the nuclear side, almost all the signs are discouraging and very worrisome. And so, you know, that's one of these things where it's very clear that's an area where you want to have increased focus.

Kevin Frazier: Can you share a little bit more about why you think concerns about nuclear risk have, kind of, fallen off the mainstream agenda?

I know that, if I talk among my friends, for example, a lot of millennials, and even especially Gen Z-ers, they didn't live at all through the Cold War. The idea of ducking and covering, of listening to drills about how we should hide under our desks, or knowing where the nearest bomb shelter was. That's something that, you know, may show up in popular culture every once in a while, but it was never a lived experience that forever made the idea of a mushroom cloud something we had to fear.

Do you think it's just this time factor that an increasing number of folks didn't grow up in that nuclear era where that just seems like a bygone threat that we don't need to be concerned about anymore?

Daniel Holz: Yeah, so I think that captures a big part of it: if you didn't grow up during the Cold War and you weren't worried every day about, you know, whether you'd make it through the day or whether you'd wake up the next morning, I think that, you know, does change one's perspective. But I think also part of it is this perception that we, quote, you know, won the Cold War.

The Cold War is in the past and the threat has receded. And there's this sense, you know, even among people that are quite informed that, you know, quote, the Cold War was a success. Yeah, we didn't blow ourselves up. So deterrence works. And now we're through that. And we made it through the most dangerous period in history, and now it's all smooth sailing.

And I think it's not just, kind of, a lack of awareness and a kind of lack of the visceral fear of nuclear exchange, but an overconfidence in our ability to address these threats. And I think especially now, as the threats are increasing and multiplying, and arguably the nuclear risk is greater now than perhaps ever, or certainly comparable with, you know, for example, the Cuban Missile Crisis, the fact that these risks are increasing, and yet people still seem relatively blasé about them, I think that is particularly terrifying. Because that makes the chance of kind of bumbling into nuclear disaster so much more likely. And it's that kind of miscalculation or just, you know, mistake that really, really keeps me up at night.

Kevin Frazier: Well, I apologize, because I'm sure you need your beauty sleep, Daniel, but at least you know, when you wake up, that you're doing work to try to build this talent pipeline. So, can you paint a little bit more of a vision of why it's so important that we train students today to be aware of these risks and to be aware of different strategies to try to mitigate those risks?

So, right now, you all just ran a successful fellowship program. I believe that was your second fellowship. What's the kind of pipeline you're developing? What does that look like? And if you were to make a case to a benevolent dictator with a lot of money about why this is a national priority, why we should be developing this pipeline, what would that case look like?

Daniel Holz: Yeah, so that's a great question. And, I mean, I must say, as you indicated, I am doing this in part because I feel like, you know, I have to do something, and this is something that I believe is valuable. And the nature of XLab, the lab part, is we're trying many different things and just trying to have impact in any way we can, and this is one of the most straightforward ways to have impact.

And I think, you know, so I'm a faculty member at the University of Chicago. And there are a lot of really amazing students, and the students are so capable and they have so much energy and so much drive and they really want to go change the world. And they want to know how to do it. They're desperate for the tools. They're desperate for the information.

And so I feel like part of the, you know, work here is to just enable them and then get out of the way. And that's been quite satisfying. And as you mentioned, we have these student research fellowships that we run over the summer. And we also have some working groups that run during the year, and we just are trying to create all these ways for students to engage on topics of interest to them.

And so, for the research fellowships, a lot of it is that we don't sort of assign students projects. We don't kind of come up with a list at the beginning and say that we need to solve these. We work with the students, given their backgrounds, given their interests, to find a way that they can contribute. And so you end up with a very wide range of projects. We find the mentors, mostly faculty members, and we let them kind of do a deep dive on a topic and produce original research.

And then what we found is that the students, for the most part, seem to find the experience extraordinarily positive. We've gotten extremely good feedback, and many of them carry what they get from the summer onwards, and whatever it is they'll do next is kind of inflected by their experience.

And so that could be going into an AI safety lab or going to graduate school, you know, working on, you know, becoming a political scientist or a, you know, a theorist trying to reduce risk, going into a kind of biological lab to, you know, prevent pandemics by having a faster development of tests, you know. There's a, there's just a whole range of ways that people get engaged.

Our goal is to kind of help them take their passions and their interests and kind of turn them towards the good, which is, you know, something they're eager to do. And we kind of help them accomplish that.

Kevin Frazier: So as you create these disciples of what I will jokingly call doomsday studies and you send them out into the world, they have this fellowship experience, they go back to their home institutions, what sort of reception have you received? Are you fielding calls from other universities saying, hey, Professor Holz, how do we start an XLab at our own institution? Have you heard from anyone in the state government or the federal government saying, hey, how can we help out? What's been the reception so far?

Daniel Holz: So, yeah, the reception has been very positive in the sense that people, when they hear about what we're doing, you know, are very encouraging, very supportive. For example, you know, you gladly came and did this workshop. People want to help us succeed, and that's been, you know, immensely gratifying.

In terms of the kind of formalization of this, my goal over the long term is to, in some sense, create a new discipline of existential risk studies. A real interdisciplinary department, where people from all different backgrounds across the university can meet and talk about these things and address them. And that's a kind of longer term goal. It's not easy to create a kind of new academic discipline from whole cloth. You don't just flip a switch.

And so, that process is ongoing, although again, I've received support across the university, across the University of Chicago, but also more broadly: people offer me their syllabi and are happy to come and guest teach and guest lecture. And in that sense, the support has really been wonderful. I mean, I think basically anyone that I've contacted has wanted to help in any way they could.

But turning that into a formal program is still something for the future. It's something I'm very eager to do, but creating the academic environment, you need a full environment, you need more than one department, this has to be a whole community of people doing this, and it can't just be the University of Chicago.

And there is a community growing out there. There's an effort at Cambridge University. There's an effort at Stanford. There's an effort at MIT. And, you know, those really are kind of focused on these interdisciplinary questions of existential risk. And a number of other institutions are talking about it. So the hope is eventually this becomes a full-fledged discipline, and many institutions have these sorts of institutes or centers focused on existential risk, you know, broadly conceived, and so students all over the world are being informed about this.

In terms of future employment, again within academia, it's hard: until there are departments of existential risk, you don't really get a degree in it. I would like to change that, would like to at least maybe have a master's program, would like to have an undergrad degree, or at least a minor, in existential risk, but right now these are early days.

And so, students will generally focus on one particular issue as part of their work, say within XLab, and then they might go to graduate school to double down in that direction. And that's certainly productive. And then, as you said, I mean, certainly students will go on, might go to the State Department, might go to, you know, various agencies, and their backgrounds are helpful.

And what we hear from people, for example, at the State Department, is that there is a dire need for students with, kind of, a broad background who are interested in making a difference along these lines. And that has declined over the decades. And in part, as you said, during the Cold War, I think there was much more of an awareness that this was critical work, and, you know, sort of the best and the brightest wanted to help.

And now, there's a general feeling that this is not where the best and the brightest are focusing their efforts, and that does not bode well for the long-term, you know, viability of civilization. If the best and the brightest are just interested in, you know, going to Wall Street and maximizing returns, that, you know, that's fine. I mean, certainly some people should, you know, do that, but if everyone's doing that and no one's thinking about these sort of greater threats to civilization itself, that could be very worrisome.

Kevin Frazier: Well, I'm pleased to hear that folks are responding to your inquiries and count me among those who will continue to respond.

I think there's a really interesting point here too, which is to say that these sorts of risks and this sort of preparedness mentality also seem to have struggled, or at least fallen a little bit down the agenda, at Capitol Hill. We have folks like Marshall Kosloff and Steve Teles talking about how climate change is, for example, kind of, the worst-case political bargain, because you're asking people to suffer immediate consequences, immediate costs, for gains that may be realized in the future.

And that's just a political loser as an issue. And we see that across x-risks. So, how do you think that sort of focus and embrace and willingness to prioritize x-risk can climb back up the political agenda? Relatedly, how important do you think efforts such as the Doomsday Clock itself are, in terms of raising awareness of these risks, to getting this back to the fore of the political agenda?

Daniel Holz: Okay, so Kevin, I think both of those, you know, this is the key question, which is, at the end of the day, a lot of what needs to be done is political action. And I think climate is a great example of this. Climate change is happening, and we know why. You know, last year was the hottest on record. There are lots of troubling signs.

I think you can, at the very least, say that many of the predictions of what might happen are now coming to pass. In some cases, things are actually worse than many climate scientists expected, at least over the past year. There are lots of troubling signs. So there's some amount of alarm within the climate community.

So, this is definitely a situation where people are very worried, and have been for a while. And you could say, well, you know, the short-term interest in just not doing anything always wins, and it's just too difficult to get political action. And I think at first pass, that's absolutely true. It is very difficult to get action on something which, as you said, is kind of long-term and, you know, harder to see. But I think a bunch of things have happened that kind of have shifted that discussion.

So, for example, one is that climate change is clearly happening at this point. And I think almost everyone, even if you don't feel it directly in your own personal experience, every time you pick up a newspaper you read about record-breaking floods or droughts or wildfires, hurricanes, storms, flooding, it's just nonstop all over the world. And so I think at this point there is this growing awareness: oh, yeah, this is bad, and the scientists, maybe they were right, maybe they actually knew what they were talking about.

I think there's still not a full understanding of just how catastrophic it's going to become. Right now the Earth is kind of the most hospitable it will be for us, for our lives. It's only going to get worse. This is the best it's going to be. And I think that is problematic because, again, what we're trying to do is prevent much more serious disasters 10 years from now, or 20 years from now, or 50 years from now.

And the actions we're taking now aren't going to fix what's happening now. That's already baked in and finished. What we're really trying to do is prevent it from becoming much, much worse and to reduce misery decades from now. So it is exactly what you're talking about, but there's now an awareness. So I think that's positive.

I also think there's an awareness at all levels of government, you know, within the UN; there's just a lot more global discussion and there is some amount of action. In part, these are connected: as people see, as countries get impacted and realize, yes, I mean, this will impact our quality of life and our security, national security. If we destabilize the globe, that's not good for civilization. As people realize, as nations realize, it's in their own self-interest to address climate change, we'll hopefully see more action.

But then finally, probably the most encouraging thing is that at this point, in particular with climate, short-term interests would say we should go renewable. At this point, the price of solar has dropped remarkably, and in many places it's less costly than going with, you know, fossil fuels, with, you know, extraction. Especially when you consider additional issues like pollution and, you know, life expectancy, and all these other, you know, downstream effects, you end up favoring renewables dramatically.

And so at this point, you could argue that if we really had only our short-term self-interest in mind, we would still go 100 percent renewable and do all the things we're supposed to do. Because even in the short term, it's less costly, it's clearly better, you know, from a nation-state perspective, and so that's what should happen.

So then there's the obvious question, which I'm sure you're about to ask, which is: why aren't we doing that? And that gets much more complicated, and it has to do with special interests. And, you know, at least in the States, it gets bound up with, you know, politics, and that's a kind of much tougher nut to crack.

Kevin Frazier: Continuing on the balance of pessimism, optimism, and awareness, generally, I am keen to talk about another one of your hats, which is your role as chair of the Science and Security Board of the Bulletin of the Atomic Scientists and your participation in setting the Doomsday Clock.

So we're sitting just 90 seconds away from midnight. And we saw this Doomsday Clock get formed at the height of the Cold War, asking just how close we are to that sort of doomsday situation. And in the subsequent decades, we've seen the clock approach midnight and move away from midnight.

And I think listeners would love to know more about what that process of setting the Doomsday Clock is like, and what the goal is in terms of adjusting that clock. And whether that's intended to provoke maybe a little bit of healthy panic, or whether, if we move the clock back, that leads to problematic complacency. What's the goal there in terms of setting that clock?

Daniel Holz: Yeah. Okay, great, yeah. So, as you mentioned, I'm chair of the board that sets the clock. And I should say, the Bulletin of the Atomic Scientists was founded in 1945 at the University of Chicago by scientists involved in the Manhattan Project.

And the scientists, you know, had developed, you know, these terrible weapons and were freaked out. And even in 1945, they sort of foresaw that these weapons would get much stronger. Even then they had a sense that we would go from fission bombs to fusion bombs, so from atomic bombs to these hydrogen bombs, which are, you know, on the order of thousands of times more powerful.

So, you know, people have in mind, you know, kind of Hiroshima or Nagasaki. That is not what we're talking about at all. We're talking about easily 100 or 1,000 times more powerful than that. That's where things were headed. At the time when the Bulletin was founded, only the U.S. had this technology, and there were only a few bombs.

You know, the scientists realized that, of course, other nations would be able to figure out how to make these weapons. Once you know it's possible, other nations will make them, and there'll be more and more of them, and they'll be more and more powerful, and unless something shifts, we'll end up blowing ourselves up.

And that was all clear in 1945, before any of this stuff really happened. Before there was a Cold War, before hydrogen bombs, before all of it, the scientists, kind of, could read the tea leaves, kind of knew the capability of the technology, and were really concerned. And so they formed this Bulletin to inform the public and policymakers, but also to warn them of some of the threats, some of the issues, and specifically issues that could threaten civilization entire.

And so, that was explicitly, in some sense, the first organization focused on existential risk in that way. And I think what's sobering is that when you read a lot of those initial documents and a lot of the initial statements, everything they were worried about came to pass, except the very last item on their list, which was World War III, where we annihilate ourselves.

That hasn't come to pass yet. But everything else that they anticipated has. And I think that's instructive, as we now hear climate science experts worried about tipping points and potentially really catastrophic outcomes. Or experts in AI, really the experts working on the technology, some of them also similarly freaked out and warning about catastrophic outcomes.

This is exactly what the Bulletin is about: to help convey these concerns to the public and to policymakers. And the Doomsday Clock is kind of one piece of that. I mean, there's also a journal; there are many things that the Bulletin does. But the Doomsday Clock is probably the best known, and the point of the clock is to kind of assess the state of civilization in terms of these sorts of catastrophic risks.

And so what we do is we meet a couple of times a year at a minimum, and we ask ourselves how things are going. Are things getting better or worse? In the last year, have these risks gotten better or worse? And we focus on a number of things. We focus on nuclear risk, but also climate. We focus on bio. We focus on what we call disruptive technologies, which include AI, but also a range of, you know, other technological advances, for example, advances in, you know, space warfare, or hypersonics; there's kind of a range of technologies that are developing. We talk a lot about misinformation and disinformation.

So in all these subjects, we kind of bring together experts and we ask, how are things going? You know, are we making progress? Have things gotten worse over the last year? Where are we over the kind of history of the clock since 1945? Where are we? And then we set the clock.

Kevin Frazier: Easy task, easy agenda item. Everyone's in a good mood. Pass the cookies. Where's the coffee? Just small questions.

Daniel Holz: Yes, exactly. So you can imagine, you know, I now, you know, chair this group, and the goal is consensus. And, you know, people are a little tense and it's a pretty serious discussion and people get very passionate. You know, we're talking about something that's not completely abstract.

And I think this is, you know, a problem for the community overall, at all levels: for the politicians, the scientists, the public. There's a tendency to, kind of, put things away in a box. Yeah, there's a nuclear risk, but okay, it's fine. You know, we can toss around numbers like, oh, maybe a few hundred million people would die in a nuclear exchange. But, you know, okay, we just kind of talk about it, write some numbers on the board, and then move on.

But, you know, we really are talking about our lives, our civilization, our planet, and what we're discussing could very well happen. A nuclear exchange could happen, could be happening right now. It takes 30 minutes to pretty much end civilization. That's still true. It's, if anything, more likely right now than it has been for many decades.

We spend a lot of time thinking about exactly all the ways that could happen right now. And there are many, and I cannot tell you, oh yeah, it's all under control, it's all gonna be fine, don't worry, there are many adults who have got it all figured out and it's all fine. No one who really studies this stuff feels like it's all going to be fine. Everyone is terrified.

Kevin Frazier: Well, on that note, and knowing that I certainly won't sleep as well tonight, I think we're gonna have to end it there. But thank you so much for coming on, Daniel.

Daniel Holz: Thanks so much.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja. Our theme song is from Alibi Music. As always, thank you for listening.


Kevin Frazier is an Assistant Professor at St. Thomas University College of Law and Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin. He is writing for Lawfare as a Tarbell Fellow.
Daniel Holz is a professor at the University of Chicago in the Departments of Physics, Astronomy & Astrophysics, Chair of the Science and Security Board of the Bulletin of the Atomic Scientists, and the founding director of the Existential Risk Laboratory.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
