Lawfare Daily: The Double Black Box: Ashley Deeks on National Security AI

Published by The Lawfare Institute
in Cooperation With Brookings
Lawfare Senior Editor Alan Rozenshtein sits down with Ashley Deeks, the Class of 1948 Professor of Scholarly Research in Law at the University of Virginia School of Law, to discuss her new book, “The Double Black Box: National Security, Artificial Intelligence, and the Struggle for Democratic Accountability.” They talk about the core metaphor of the book: the idea that the use of artificial intelligence in the national security space creates a "double black box." The first box is the traditional secrecy surrounding national security activities, and the second, inner box is the inscrutable nature of AI systems themselves, whose decision-making processes can be opaque even to their creators.
They also discuss how this double black box challenges traditional checks on executive power, including from Congress, the courts, and actors within the executive branch itself. They explore some of Deeks's proposals to pierce these boxes, the ongoing debate about whether AI can be coded to be more lawful than human decision-makers, and why the international regulation of national security AI is more likely to resemble the fraught world of cyber norms than the more structured regime of nuclear arms control.
Mentioned in this episode:
- "National Security AI and the Hurdles to International Regulation" by Ashley Deeks on Lawfare
- "Frictionless Government and Foreign Relations" by Kristen Eichensehr and Ashley Deeks in the Virginia Law Review
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Please note that the following transcript was auto-generated and may contain errors.
Transcript
[Intro]
Ashley Deeks: If you're in the Justice Department and you're working on a case that involves classified information about some nuclear secrets, you will be cleared for that type of information, but not about some counter-terrorism activity that's happening around the world. So it's absolutely right that different agencies and different actors inside those agencies experience the national security black box quite differently.
Alan Rozenshtein: It is the Lawfare Podcast. I'm Alan Rozenshtein, associate professor of law at the University of Minnesota and a senior editor at Lawfare. I'm joined by Ashley Deeks, the Class of 1948 Professor of Scholarly Research in Law at the University of Virginia.
Ashley Deeks: It just goes to argue that it will be very hard, based on my current understanding of how these systems are built, to get us to a place where we could use an autonomous system confidently, or be confident that it complied with the laws of armed conflict. But we should never say never.
Alan Rozenshtein: Today we're discussing her new book, The Double Black Box, which tackles the challenge of democratic accountability when opaque artificial intelligence systems are used inside the already secret world of national security.
[Main Podcast]
So, before we get into talking about the double black box and what it is, I wanna first lay the groundwork for the sorts of AI national security systems we're talking about. The book touches on several high-risk scenarios, lethal autonomous weapons, AI-driven cyber operations, all of which could escalate, intentionally or otherwise.
And so of all the potential applications of AI in the national security space, which one, or you can pick two or three if you don't have a favorite, worries you the most and why?
Ashley Deeks: So, I think of high-risk national security AI as primarily AI tools that can inflict lethal harm on people or lead to their detention. I tend to think of those as among the most significant, but of course there are also concerns in, for example, the cyber context, where the initial activity is not lethal and does not result in detention but does create, effectively, a flash crash that can lead to conflict, which would ultimately produce casualties even though it does not itself create them. So those are some of the concepts I have in mind.
As your question suggests though, there are lots of places in which I think the national security community is going to ultimately adopt AI that are not just in those buckets. So, the intelligence community using it for all sorts of analysis, for counterintelligence activities, Homeland Security using it, and so on.
So a lot of the oxygen in the room does get sucked up with the lethal autonomous weapons systems conversation, and that's an important one, but I wanted to think a little bit more broadly about other sets of tools in the national security space.
Alan Rozenshtein: You use AI-enabled cyber tools as one of your case studies, and I'm curious why you chose that one. Is it because that's where the systems are most developed, or because it's the example where you can most easily imagine their autonomous nature being most effective?
Ashley Deeks: I guess I'd say maybe two reasons. The first is that, based on my research, it did seem as though that was the place where we might see the earliest adoption of basically autonomous AI tools out in the wild, maybe used defensively, maybe used offensively. And so that seemed like a realistic place to start with a case study.
A second reason is that I think it's a pretty useful way to demonstrate how Congress can quickly fall out of the picture, even in a place where we think, under the Constitution, Congress should have a significant role in deciding when to resort to force. I thought that some of the things that would play out in a cyber autonomy context would show how quickly Congress could lose control of its role.
Alan Rozenshtein: One point you make, and one of the assumptions that you are explicit about in the book is that what's sometimes called artificial general intelligence, which is this idea that perhaps one day AI systems will be, and definitions differ, but kinda a rough definition is that an AI system will be as good as kind of an above average human at all tasks, or at least all tasks that don't involve, you know, opposable thumbs though perhaps with robotics.
We'll get there, uh, pretty quickly and you, you make an assumption that AGI is not imminent. And so I'm curious both why you make that assumption and then also what happens if that assumption is incorrect. Because at the very least, my reading of, or at least my following of industries, suggests that it's honestly a 50/50 of whether AGI is coming in the next few years, and even some of the more skeptical folks think that within 10 years we'll have some, something pretty, pretty close to it, at least functionally.
Ashley Deeks: Yeah, it's a good question, and it's one that our friend Jack Goldsmith also asked me when I had him read an earlier draft. So, first of all, I'm not a superforecaster and I'm not an employee of a high-tech company working on AI. So I didn't wanna get too far out in front on trying to predict what a world looks like in which either the U.S. or China or another country achieves AGI. As your question suggests, there are really smart people who come out on both sides of this debate.
So let's assume that I shouldn't have assumed that. Let's assume that we do achieve AGI. I guess the question is, is that still a double black box problem? Are there still things in this book that I think would help inform the problem and maybe some of the solutions? And I think the answer to that is probably yes.
I wanna do more thinking about it, but to me it still sounds like there are parts of the use of AGI in national security that produce a similar and, I would say, more magnified problem, if we're talking about, for example, full autonomy in armed conflict, with humans doing very little to execute the strategies, the tactics, and so on.
Maybe you're also talking about government systems that are very capable of accurately forecasting foreign policy developments, systems that are engaging in extensive and very competent collection and analysis of intelligence. It might become clear quickly to the public that some government has achieved AGI in some settings, right? That would make it less of a national security black box, but the public probably won't know precisely what those systems can do, assuming that the government wants to keep part of that classified. And all of the AI black box pieces of the problem only worsen.
So I think we still have a situation in which Congress will be ill-suited, at least on current trends, to understand the tech. They're too under-resourced, too politically fragmented to easily regulate what has emerged. The executive agencies that are controlling the AGI are hugely empowered at the expense of non-technology agencies. Courts are really outta the loop. And we can talk about this more, but I do think some of my prescriptions would still be pretty highly relevant, at least as I'm envisioning AGI, where you have allies who can still play an important checking role.
Just because you have an AGI system doesn't mean you should be using it everywhere. I do think Congress would need to pull up its boots and impose framework regulations. I think corporate whistleblowers who are inside the companies that are helping develop this AGI could help flag abuses and so on.
And the general public, having learned that we've entered this AGI world, will need to vote with their feet and their voices. So, I think a magnified problem. There may be other things, though, that I haven't fully thought through that would change it and change my analysis in more significant ways.
Alan Rozenshtein: For what it's worth, the reason that part of the book jumped out at me is because, if anything, I would think that the closer we get to AGI and then artificial superintelligence, which comes later, the more relevant your analysis becomes on both of those dimensions.
Ashley Deeks: Yeah, so I think one reason why I originally made that assumption is because I start out by trying to recognize that there are pressures on two sides here, right?
There are pressures in wanting to make sure that our government is acting in a coherent, lawful way, and there are pressures, as other countries are developing these systems, for us to do more and to do it faster. And so I do feel like we're kind of at an equilibrium now in a narrow AI world, but that balance will potentially change if one of our adversaries achieves AGI.
So that was, I think, why I built that assumption in, without trying to preordain how the diagnoses and the solutions would attach to an AGI world.
Alan Rozenshtein: And I wanna come back later in this conversation to that question of external pressure from an adversary, which in almost any scenario would be China. But we've done a bunch of table setting on the technology here. I wanna now go to your analysis of it and start with the central metaphor of the book, the double black box.
So you've mentioned it a little bit already, but just to be very explicit, what are the two boxes, why are they black, and why is one inside the other?
Ashley Deeks: Okay, so the first black box is the national security black box. As many listeners will know, our government today, and other democracies, make a lot of their national security decisions in secret. They do that because they have to use sensitive intelligence and sensitive technologies, and our governments have to keep those tools secret from adversaries, which also means they have to keep them secret from their own publics, right?
And so this makes it more difficult to oversee the executive than in areas where the government is not operating behind the veil of classification. So I think it's fair to say that our national security agencies largely operate inside a national security black box, where there are lots of things happening and it is hard for those outside the box to have a sense of what's happening, why it's happening, and who's doing what.
Alan Rozenshtein: And just to deepen that further, it's hard to see from outside the box, but it's also hard to see within the box as well. I mean, we were both, in our government lives, national security lawyers or adjacent lawyers, and I was in that box and I had no idea what was happening at the proverbial two feet in front of me. It was very dark.
Ashley Deeks: Yeah.
Alan Rozenshtein: Is that a fair description? I wanna see how much I can squeeze out of this metaphor.
Ashley Deeks: Yeah. So I do think, and I say in the book, that people will experience the national security black box, we'll stick with that for a sec, at different levels of opacity, and so they will also experience the overall double black box at different levels as well. But you're right, there are some people inside the government who are super users, who have access to anything they wanna see because of their levels of seniority, the type of ability they have, the assignments they have, and so on.
Not many people in the government have that; it's on a kind of need-to-know basis. So if you're in the Justice Department and you're working on a case that involves classified information about some nuclear secrets, you will be cleared for that type of information, but not about some counter-terrorism activity that's happening around the world.
So it's absolutely right that different agencies and different actors inside those agencies experience the national security black box quite differently.
Alan Rozenshtein: Okay, so that's box number one.
Ashley Deeks: That's box number one. And I will just say in passing, the reason people have written about government secrecy for years is that the fact of the government doing things in secret challenges what I, and others, have called our public law values.
In other words, when the government is operating in secret, it is harder for us to tell whether it is doing things lawfully and whether it is being effective and efficient about what it's doing. It is harder to require officials to justify their decisions because there are fewer people to challenge them, and it is sometimes harder to hold them accountable for the decisions they've made, partly because the public may not know about them.
Alan Rozenshtein: Well, let's, let's stay on those public law values actually for a second.
Ashley Deeks: Okay.
Alan Rozenshtein: So those are all what we might call procedural values. And, you know, we're both lawyers, which is another way of saying we're weird robots who are obsessed with procedure. I feel like that's a pretty good definition: humans who have been turned into procedure-loving robots.
Ashley Deeks: Except I don't teach civil procedure.
Alan Rozenshtein: Well, I think we're all procedure teachers, whether we teach civil procedure or not.
Ashley Deeks: Yes.
Alan Rozenshtein: Why focus on procedural values like this? Why, why not substantive values?
Ashley Deeks: Yeah. So I am trying to write a book that has some traction in reality
Alan Rozenshtein: That's so refreshing from an academic.
Ashley Deeks: And I may have failed, but I tried. I took the view that in the era of political polarization we're in today, it was more likely to find common ground among people who differ about what the government should be doing, maybe agreement around how it should be doing it. So I like to think that these values are relatively neutral, in that whether you are on the left, right, or center of the political spectrum, you want our government to be acting lawfully, you want it to be acting efficiently, and you wanna be able to hold your officials accountable for the choices that they've made.
I felt like you could achieve more consensus around that than around a debate about whether the government should develop a facial recognition software tool that it was going to use against some group overseas who may be freedom fighters, may be terrorists, or whether we should be interfering in foreign government elections using AI tools. The substantive fights are hard and real, but maybe we could achieve more consensus if we focused on these, as you say, more procedural values.
Alan Rozenshtein: So just to make the point explicit. We may not all agree on whether we should build Skynet, but we should all agree that if we're going to build Skynet, it should be through some reasonably democratic, reasonably transparent process in which the people doing it have some idea of what they're doing.
Ashley Deeks: Correct. That's fair.
Alan Rozenshtein: Excellent. Okay, so, so, box one. Okay. Let's talk about box two.
Ashley Deeks: Okay. So box two is the AI black box. And again, I did not come up with this term; it has been around for a while, especially as machine learning and AI tools have become more and more sophisticated. The idea is that they are basically using neural nets to develop predictions, identify things, and make recommendations.
And they're doing it in a way that makes it hard even for the computer scientists who have designed the systems to understand how it is that the systems are reaching their conclusions. So there's data in, there's training, something comes out the other side, and it's almost impossible to understand what has happened in the middle.
So that is a kind of paradigmatic black box. So, inside the national security black box, we are dropping a series of AI black boxes that are going to inform, advise, and operationalize national security activity.
Alan Rozenshtein: So I wanna stay on this point for a bit, because I think it's important to understand, at a somewhat technical level, what's going on.
So here's my understanding, correct me if I'm wrong. The way these machine learning systems in particular operate, machine learning being one subtype of artificial intelligence, is that you have a bunch of parameters that are taking input and turning it into output.
But the way this is trained, you can't look at any particular parameter, any particular, quote unquote, neuron inside and say, that's the neuron that lights up when the machine thinks that this target in the drone footage is a terrorist. And so it's not as if I can look at the after-action and say, well, that neuron lit up, so therefore this is why the machine behaved that way, right? Rather, the information is sort of diffused around the network, right?
Ashley Deeks: Yep.
Alan Rozenshtein: That is, and has been, one of the central challenges in machine learning. There is, and you do talk about this in the book, a subfield of machine learning pursued by some labs more than others, I think Anthropic in particular has been really focused on this, on what's called interpretability or explainability. And this is the idea of figuring out, through a bunch of clever computer science and mathematical and statistical techniques, why a machine acted the way it did.
So is your read of this that it's just never gonna be enough? Or that it might work, but it might not? I'm curious how much of your account hinges on explainability not being solved, or at least, since it'll probably never be fully solved, not being reasonably well dealt with in the next few years.
Ashley Deeks: It is, I would say, an important piece of the book, but I think there are enough other points in play that even if we improve things, if we make the AI black box somewhat less opaque, that's all to the good, but a lot of the concerns described in the book do still survive.
As I understand it, you're absolutely right that explainable AI is one helpful way to try to narrow the size of the AI black box that's inside this double black box. People have been working on this for years and years and have taken a range of approaches. My understanding is that the more you make a system explainable, the less effective it is. And so there's some tension between wanting to improve its explainability or interpretability and improving its capacity. And my sense, again, not as a computer scientist, is that as we are trying to shift to Meta's superintelligent systems, it's gonna be even harder to really produce quality explainable AI.
The other reason why I think it's gonna persist as a problem for a while is that the system can produce explanations that may simply be made up. They may make intuitive sense to a human, but it's still very hard for us to tell if that's the real basis, the real reason why the system recommended X instead of Y or thought it was a cat instead of a dog.
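To make the opacity point concrete, here is a minimal, hypothetical sketch, not drawn from the book or the conversation, of a tiny neural network that learns the XOR function. Every number and name below is an illustrative assumption. The trained weights solve the task, but no individual weight encodes a rule a human can read off, which is the "black box" property being described.

```python
# Hypothetical illustration: a tiny 2-8-1 neural network trained on XOR.
# It reaches the right answers, but the "rule" it learned is smeared
# across all the weights rather than sitting in any one neuron.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):                          # plain gradient descent
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # backprop through output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)           # backprop through hidden layer
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print("predictions:", np.round(out.ravel(), 2))  # typically close to [0, 1, 1, 0]
# The parameters that produce those answers are just numbers; inspecting any
# one of them reveals nothing like "output 1 when the inputs differ."
print("hidden weights:\n", np.round(W1, 2))
print("output weights:", np.round(W2.ravel(), 2))
```

The interpretability research Alan mentions tries to recover human-meaningful structure from exactly this kind of trained parameter soup after the fact.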
Alan Rozenshtein: Which, for what it's worth, is true of humans as well, right? I mean, the number of explanations and ad hoc rationalizations that are false, and perhaps even worse, not just false but unknowingly false to the person making them, is also quite large.
Ashley Deeks: Yeah, it's true. I mean, this is a sort of persistent question: are computer systems actually less transparent than humans?
But, you know, you can bring the secretary of the Treasury up in front of Congress and ask a bunch of questions, and ask a bunch more questions if you feel like the answers aren't reasonable, and try to triangulate to a truth in a way that might be harder to do with a Skynet system in the witness box. Maybe.
Alan Rozenshtein: Maybe.
Okay, so we have the matryoshka doll of boxes here. So let's talk about why that's a problem for the public law values that you've articulated. I wanna go through the different traditional checks that we have in the national security space: Congress, the courts, actors within the executive branch, private actors, and international actors.
And just go through and have you identify what the problem is, and then what some of your proposals are. Because one of the things I loved about the book is how practical it is, how many specific proposals there are. And one thing that you emphasize, I think rightly so, is that there's not one silver bullet.
There's not gonna be, as the internet likes to tell us, one weird trick that's gonna solve all of this; it's gonna be, like all interesting problems, something cobbled together from a bunch of different dimensions.
So let's start with Congress. Maybe one way of asking this is, why do you start with Congress? Why is Congress the key here? I mean, you don't have to be a professional politics watcher to have not a lot of faith in Congress's ability to do anything these days, let alone check another branch of government. But why has it historically been so important, and even reasonably effective sometimes, in national security oversight, and how can it continue to do something like that in this double black box age?
Ashley Deeks: So I start with Congress because I do think that they, or 'it,' Congress is a 'they' not an 'it,' are the most traditional, powerful, historical check on the executive in the national security space. They have the power of the purse. They have the power to legislate, the power to investigate, to convene hearings, to hold up presidential nominees, and so on.
So when they are operating at full speed, they can be pretty effective. And we can think back to the 1970s, where, in the wake of massive problems with what the executive had been doing in secret, Congress was able to convene the Church Committee and produce a credible report of a thousand-plus pages.
The House did the same thing. They produced the Foreign Intelligence Surveillance Act and the War Powers Resolution, and started to get after the covert action problems. So maybe this is a part of the book that is less realistic than I would like, but Congress is the traditional, most powerful counterweight to what the executive is doing in secret.
You know, the four public law values that I focus on are legality, competence, accountability, and the requirement of justifying your actions. And if you think about a couple of the key framework statutes that Congress has enacted where it worries that the executive is making important, high-stakes foreign policy and national security decisions, it has developed the Foreign Intelligence Surveillance Act and a covert action statute, right?
The concern is that you would have actors inside the executive branch authorizing covert actions that end up taking the U.S. into a conflict or just producing really bad outcomes, maybe without the knowledge of the president, and so on. So Congress enacted a statute that says, no, that's not how we're gonna do it; the president has to make a finding, and so on.
So it struck me that a potential way to deal with the double black box on Congress's side is to use those statutes as a kind of model: require the president, himself or herself, to sign off on very high-risk uses of national security AI, acting on recommendations from a range of national security agencies and their lawyers, which is kind of like how the president signs off on covert actions today.
The statute could require the president to notify Congress of those approvals and keep Congress fully and currently informed, for example. That at least makes sure that there is another set of actors that can check the legality and competence of, and create some accountability for, the choices being made to deploy these systems.
Alan Rozenshtein: And I guess this is gonna be a theme as we go from institution to institution, because I could ask this for the courts as well, and for the inspectors general and various lawyers and 'offices of goodness,' which I think is a great phrase that's sometimes used to talk about intra-executive-branch watchdogs.
This all seems helpful in piercing the outer black box, the national security black box, but not the AI black box itself, right? Because again, assuming that these things remain unexplainable, which is why I think this is an interesting empirical question, sure, Congress can have the president sign off. Congress can drag such and such in front of the Select Committee on Intelligence and yell at them about what Skynet is doing.
But at the end of the day, if the answer is, well, we're not entirely sure, we think it does okay, its error rate is pretty low, no, I can't tell you why it authorized the Hellfire missile in that instance versus another instance, is there anything in these oversight regimes that can get at the inner black box, not just the outer black box? And maybe the answer is no, and that's just a trade-off.
Ashley Deeks: I would say probably not directly, but I can think of a couple ways indirectly it could be relevant.
So if Congress regularly asked the actors who were coming in front of it to describe a particular high-risk use of national security AI, okay, well, what was the system's explanation for why it chose this target as opposed to that target? The officials then have an ex ante incentive to make sure that their systems produce some form of explanation so they can report to their congressional overseers, 'cause they know those are the kinds of questions that Congress is gonna ask.
The other thing, and it's sort of a blunter tool, but if Congress convenes a hearing and the assistant secretary of whoever comes up to explain it, and the person has to say, yes, we've used the system several times, sometimes it's made a very accurate recommendation, we were able to find this missile site in the mountains, and a couple of times it made a mistake and we bombed a village, I don't think DOD would necessarily do that, but just hypothetically.
Congress can say, you have to stop using this system until it reaches a particular level of accuracy or competence or confidence, and we're gonna legislate, using the power of the purse, until you fix that. So it's prodding those who are most able to look inside the AI black box to do better. But as we already talked about, it may be ultimately impossible to do that, and you can only use proxies to figure out the internal workings of the system.
Alan Rozenshtein: So I actually think this is a good opportunity to ask my China question. And the China question is this. Let's stick with Congress for a second, because they are, of course, the most political of the branches, and I don't mean that in a pejorative way, just the most ambiently political, for obvious reasons.
Do you think there will ever be realistic pushback or oversight of these AI tools in the context of what feels to me sometimes to be the only source of bipartisan consensus in Washington, which is that we are in a Second Cold War, or however you wanna call it, with China, and that one of the battlefields in that Cold War is going to be AI? And that, you know, the metaphorical occasional village that we accidentally bomb, it's a bummer, obviously we'd rather not, but certainly we're not gonna do anything to take the foot off the gas pedal as long as we're in this China AI race.
Ashley Deeks: Right. So I agree with you that if there's one thing that Washington can basically get behind, it's consensus against China. Kristen Eichensehr and I wrote a piece called Frictionless Government, where we take the China example as a launching point for the piece.
You might well be right. I talk about a range of different actors and bodies who could provide some pushback and checks and who would not be captured by that consensus. But I do think that in a scenario in which China is leaping ahead on AI tools, including national security AI tools, Congress may not choose to act.
I do talk about a non-traditional check: foreign allies, whom we often need for things and who can often see behind our veil of secrecy, as we can sometimes see behind theirs. I'm thinking about, you know, NATO allies, AUKUS, the Five Eyes.
Alan Rozenshtein: So it's because we're cooperating, not because they're spying on us, though, presumably everyone is also spying on everyone.
Ashley Deeks: It's voluntary: saying, you know, we share a military alliance; you need to understand how our systems work, and we wanna understand how yours work.
And I can imagine that even where there's a lot of consensus in Washington about needing to push forward as fast as possible to combat China, some of our allies might be cautious about that, including for legal reasons, and their caution can potentially infuse some caution in us, or make us think hard about some of the choices we're making if we don't wanna lose their cooperation in places X and Y.
One other thing to add here, and I think this is true, maybe it's too Pollyannaish, but other people, I think, like Michèle Flournoy and Avril Haines, have made this point: there may be a real political and economic advantage to the United States in making sure that our systems do comply with public law values, that we will then have the kind of gold-standard system that our own military is comfortable using, right?
They don't wanna use systems that turn on them, for example. And foreign countries may in a choice between, do I wanna buy China's aggressive, maybe less tested, maybe less verified AI or the U.S.–
Alan Rozenshtein: Killer AI with socialist characteristics is how I think about it.
Ashley Deeks: Splittist characteristics.
Alan Rozenshtein: Nice. That's good.
Ashley Deeks: –that maybe there is a longer-term advantage there if we can keep our eye on that ball.
Alan Rozenshtein: Let's talk about the courts for a little bit. Obviously the courts play a role in overseeing national security, especially in the intelligence context; you have the Foreign Intelligence Surveillance Court. At the same time, though, I'm reminded of this great quip that Justice Kagan made, I forget whether it was in the NetChoice argument or maybe the year before in the Gonzalez v. Google argument, that we're not the nine greatest experts on the internet.
And so I do wonder, given how difficult it already is for the courts, for various institutional reasons, to really police the executive, how much harder is it when you're asking a bunch of generalist lawyers to start peering not just into the first black box of national security, but into that central black box of the AI systems themselves?
Ashley Deeks: Yeah, I don't put a lot of weight on or hope in the courts having a significant role here, I guess with maybe three minor caveats.
So the first is, I wrote a piece a while ago that suggested that courts may help drive actors towards explainable AI as they start to face cases, maybe involving expert witnesses, or where the government's done something, maybe in the Social Security Administration, using an algorithm, and the court says, well, why did the system make that recommendation? The person says, I don't know. The court says, that's not good enough.
So it could be that by asking certain questions and demanding certain evidence in court, the judges might drive the sort of computer science world towards greater explainability in these pockets of cases that come up.
Second, you know, the FISC naturally lives inside the national security black box. It conceivably will confront cases in which the FBI or the CIA comes seeking a probable cause finding, the Justice Department would do it directly, of course, but maybe the intel community indirectly, and says, we've used AI tools, we think this is probable cause, and the FISC says, say more.
So it's just kind of testing that. I don't think you have to be a really high-tech person to ask questions in that space, and I think the FISC judges are pretty sophisticated about surveillance at this point.
And then the third question I have in my mind is, are we gonna see some public cases involving non-national security AI that produce judgments that shed some light on how the government decides it should operate behind the veil of secrecy in a kind of parallel situation? To be more specific, maybe something comes out about facial recognition in a public case in federal court, and the holding is such that it gives people inside the government a little bit of pause to say, well, we've kind of been doing something different from that; maybe we have some legal obligations under the Fourth Amendment that weren't what we thought they were. So a little bit of spillover, potentially, from those public cases causing the government to rethink what it's doing in its national security work.
[Scaling Laws Ad]
Kevin Frazier: AI policy debates these days move faster than D.C. tourists chasing shade in the middle of July. If you're struggling to keep up with AI law and regulation, Lawfare and the University of Texas School of Law have your back with a new podcast, Scaling Laws.
Alan Rozenshtein: In the show, we dig into the questions that are keeping folks like Sam Altman awake that are driving legislative policy and steering emerging tech law. I'm Alan Rozenshtein, Lawfare Research Director and a law professor at the University of Minnesota.
Kevin Frazier: And I'm Kevin Frazier, the AI Innovation and Law Fellow at Texas Law, and a senior editor at Lawfare.
Alan Rozenshtein: We've lined up guests that you won't wanna miss, so find us on Apple, Spotify, YouTube, wherever you get your podcasts. Subscribe and don't miss what's next.
[Main Podcast]
Let's go into the executive branch itself. One of my favorite parts of your book, and maybe I'm biased because I was one of these people, obviously very junior, but this was at least my experience, is that the executive branch itself has all sorts of internal checks. Obviously you can debate how effective they are or aren't, and obviously we're living in an interesting constitutional and political moment, but speaking generally, that internal self-regulation is very important.
Now, within that, you describe a potential power shift away from lawyers and toward the engineers who build these systems. And so I'm curious how this 'code is law' reality, to quote the great internet law scholar Larry Lessig, changes the role of the government lawyer. What does the government lawyer need? Are we all gonna have to get master's degrees in computer science, or do government lawyers just become less important in this world where so much relies on how the technology actually works and who is building that technology?
Ashley Deeks: So I think this is a little bit of a black box. Maybe this is our third black box.
Alan Rozenshtein: Oh my God.
Ashley Deeks: I know.
Alan Rozenshtein: I think it becomes a black hole at some point. Once you add any more black boxes, you're in black hole territory.
Ashley Deeks: Dark matter. So, just to second your point, there are of course a lot of checks inside the executive branch that are not always appreciated from the outside and that do, I think, really important work. Some of that is the interagency lawyers group. Some of it is just the tug of different agencies sharing the same broader goal but having a lot of different views as to how to get there.
Each agency thinks there's a better way to do it. And then there's the question of lawyers falling a step behind as we shift more and more towards relying on AI for all sorts of things, some of which are national security decision-making and some of which are just lower-level inputs into it.
It's hard for me to fully know where the lawyers can and should be inserting themselves into this process. Ideally, it would be right at the front end, right? It would be somebody saying, look, we think we're gonna develop X kind of system for the State Department that will be heavily infused with AI, and before we actually start building the system, we wanna sit down with a lawyer and understand the basic law that attaches to this kind of scenario, what kinds of outputs would be most useful in helping the policymakers and the lawyers get where they wanna go, and so on.
I'm skeptical that that is how it works right now, especially because a lot of these tools are being acquired from private companies. But I do think that we kind of have to get ourselves to that space as lawyers if we want to maintain some relevance and not just be on cleanup duty of saying like, oh, well, we've already acquired it, we've already started to use it, now we see there are these problems. How can we clean that up? It would be far preferable, I think, to do it on the front end.
And just at a more macro level, I do think there's gonna be a power shift towards the agencies that are heavily using these tools and away from the agencies that use them a lot less. I think that means more power to DOD and CIA, less power to Treasury, Justice, State. But that's a hypothesis; I don't have direct evidence of that.
Alan Rozenshtein: Just to jump off that last point, why are you making that cut? I'm curious to explore, just for a second, your intuition that DOD and CIA, and probably add NSA to that, are gonna be on the sort of pro-AI side. Treasury does a ton of analytical work.
State can make use of a lot of AI in analyzing foreign relations and open-source intelligence. To me, the nature of AI, especially as we get closer to AGI, is that it can do any cognitive task, and it's not obvious to me that the tasks that DOD does are more or less cognitive than the tasks that DOJ does.
Now, there may be cultural differences, and maybe that's what you're getting at, but it just jumped out at me that the way you bucketed those agencies was not intuitive to me.
Ashley Deeks: Yep. So I guess I'm thinking, as a first cut, about which agencies are going to use and develop the higher-risk kind of AI faster, and that I think was the CIA, DOD, NSA bucket.
Alan Rozenshtein: The ones that blow stuff up.
Ashley Deeks: Yeah, and do other intelligence-related activity. Although, of course, Treasury and State and Justice all have intel capacities as well.
But there's also the cultural point. If we took a slice right now to see how far along DOD is on thinking about AI, how to purchase it, when to use it, what the rules of the road are, I think DOD is probably further down that road than State. DOD has issued a policy on this; the IC issued a series of questions on this, that its users and developers should be asking, five years ago.
I don't think that State and Treasury and Justice are thinking as aggressively and actively about this, but I totally agree with you that there can be really important uses for all of these agencies. And I would urge them to pursue those uses, in part so that it starts to infuse their culture more and they can keep up with their counterparts in other agencies.
Alan Rozenshtein: Let's stick with the perspective of a lawyer for a second, and maybe this is just professional vanity, but I still think it's helpful, because I can imagine a world in which I'm a lawyer and I'm very worried that the folks in my building are using AI that I can't understand. Fair. I can also imagine a world in which I'm a lawyer and I'm relieved that people are using AI, because at the end of the day, what I want is legality.
I want systems that follow whatever rules I have come up with or that I believe are operative, and, and here we briefly touched on the idea that humans are black boxes too, it's not obvious to me that AI is necessarily more inscrutable than humans are. At the very least, I can train an AI system in a much more direct way than I can a human.
And when you look at the history of predictions that AI can't do such and such because it's too difficult and amorphous, those predictions have generally not fared well as development has continued. Just to give an example, we have self-driving cars now. Obviously there are lots of technical impediments, but they work. You can go to San Francisco and ride in a Waymo, and they are at least an order of magnitude safer than human drivers, right?
It seems to me that if you just amassed enough data, and that data could be, for example, laws-of-war targeting data, you could train an AI model to be at least as good as a human targeter, potentially better, and, more importantly, improving in a way that a human targeter might not be.
So I can imagine, if I were a lawyer, not just preferring an AI system over Bob, whom I can train as much as I want but God knows what he's thinking inside his head, but also, to push it a step further, being worried about overcautious use of AI. Again, to go back to the self-driving car analogy, one might worry that we focus so much on one bad Waymo accident that we delay the rollout and then a bunch of people die, because the alternative to Waymo isn't not driving; it's humans driving, and humans are horrible drivers. So I was just kind of curious, I'd love for you to respond to that thought.
Ashley Deeks: So the argument that you've just made is, I think, in large part the argument that the U.S. government makes for why we don't need a new treaty to regulate lethal autonomous weapons systems. There, I think, the argument is that there is existing law on the books, the laws of armed conflict, with some key principles: distinction, proportionality, precautions.
And it could well be, in the short to medium term, that the AI systems will be more effective at complying with the laws, and produce fewer civilian casualties, than humans would, not being subject to getting tired, not subject to having seen their friend die, and so on. So I think the U.S. government's argument here is a reasonable one. We don't know whether the systems could in fact get to that point, and I think what you're saying is there's reason to think maybe they can.
I have thought a bit about this idea of coding the law of armed conflict. Notice, of course, this is just a slice of the relevant law that we would wanna think about; there are lots and lots of constitutional, international, and statutory laws that would be relevant to the kinds of programs we've been talking about.
But let's just take the laws of armed conflict. Someone named Lisa Shay and a couple of her colleagues, she, I think, is or was a professor at West Point, did an experiment where she got three groups of coders to code what's a very highly determinate law: a speed limit. She took 52 computer programmers, put them into three groups, and then asked them to encode a speed limit and determine violations based on real-world driving data.
She told one group to implement the letter of the law, one group to implement the intent of the law, and gave a third group, I think, very specifically crafted specifications on which to base their program. And the three groups produced wildly different numbers of tickets for the same group of drivers. The whole point is to show how many decisions have to be made during this process that you wouldn't necessarily think of.
So it depended on whether the groups decided to treat repeat offenses within some number of miles as one violation or two, or whether the duration of the violation mattered, three seconds of speeding versus three minutes of speeding, did both get tickets or only one, and whether to take into account weather conditions.
I think it just goes to argue that it will be very hard, based on my current understanding of how these systems are built, to get us to a place where we could use an autonomous system confidently, or be confident that it complied with the laws of armed conflict. But we should never say never on this, right? And I think the way people have thought about this particular problem is that you can sort of geofence the use of these things, so you would feel more confident that there weren't civilians in the space where the system was operating, for example.
But if the larger point is that there may be ways to bring these systems closer, to code them in ways that bring them closer to public law values, then I agree with that and I think that would be great. And I think most lawyers would and should agree with that as a goal.
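To make the kind of encoding choices described in the speed-limit experiment concrete, here is a minimal, hypothetical sketch, not Shay's actual code or data, in which two reasonable encodings of the same rule produce different ticket counts on identical driving data. The speed limit, the trip readings, and the two-minute "sustained speeding" threshold are all invented for illustration.

```python
# Hypothetical illustration: two encodings of the same speed-limit rule
# applied to the same (made-up) driving log, yielding different tickets.
from dataclasses import dataclass

SPEED_LIMIT = 55  # mph

@dataclass
class Reading:
    minute: float   # minutes since start of trip
    speed: float    # mph

# One driver's trip: two brief spikes over the limit, then a sustained stretch.
trip = [Reading(0, 50), Reading(1, 58), Reading(2, 52),
        Reading(3, 57), Reading(4, 53),
        Reading(5, 61), Reading(6, 62), Reading(7, 63), Reading(8, 51)]

def tickets_letter_of_the_law(readings):
    """Every reading above the limit is its own violation."""
    return sum(1 for r in readings if r.speed > SPEED_LIMIT)

def tickets_intent_of_the_law(readings, min_duration=2):
    """Only sustained speeding counts: a run of consecutive over-limit
    readings lasting at least `min_duration` minutes is one ticket."""
    tickets, run = 0, 0
    for r in readings:
        if r.speed > SPEED_LIMIT:
            run += 1
        else:
            if run >= min_duration:
                tickets += 1
            run = 0
    if run >= min_duration:
        tickets += 1
    return tickets

print(tickets_letter_of_the_law(trip))   # 5 tickets
print(tickets_intent_of_the_law(trip))   # 1 ticket
```

The divergence comes entirely from choices the written rule never specifies, which is the gap between a statute and its encoding that the experiment is meant to expose.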
Alan Rozenshtein: So I wanna finish by talking about the international dimension to all of this, because one of my favorite parts of the book is at the end, when you talk about how to think about international cooperation and competition in this domain, and you note that the discussion often immediately jumps to some analogy based on nuclear non-proliferation treaties: if we're really worried about killer robots, maybe we should treat them like we do nukes. And obviously nuclear non-proliferation isn't perfect, but it's done pretty well.
But you argue that that's a bad analogy and that a better analogy is how we deal with cyber weapons and cyber threats, which is to say, not great. So unpack, if you would, why the nuclear analogy doesn't work, why the cyber analogy works better, and what your outlook is for the possibility of cooperation. And again, I will cite the looming China effect, which I just think is always such an important political reality here.
Ashley Deeks: Yeah. Well, thanks to you; I wrote a paper for you, and you were the Lawfare editor of it, that ended up becoming this part of the chapter, so a belated thanks for your inputs on that.
The idea is basically that once we recognize this challenge of the double black box, you might immediately wonder, well, are there things we could do on the international plane to shrink the size of the double black box? That is, to take certain uses of AI, certain systems, off the table ex ante, which would make our domestic box smaller.
And you're right. I think there is some work to be done on the international plane, but I think it's important not to oversell how successful those discussions are gonna be in limiting where states go with this. It was striking, when I started thinking about this, that the key analogy, as you said, was to nuclear weapons. The argument goes: if the Soviet Union and the United States were able to reach a number of agreements about restricting the use of nuclear weapons, their size, location, verification, then surely those types of states, Russia, the U.S., maybe China, et cetera, should be able to come together and regulate what seems to be a system or series of systems that could pose just as serious a risk to the world.
I just thought that, for a range of reasons, that analogy seemed misplaced, partly because nuclear weapons are largely not dual-use systems: they are hard to make, built by governments, and they are things you can count. AI is not that at all, sort of the opposite of that. The better analogy, it seems, is cyber, and indeed at some point cyber and AI may overlay each other in terms of these tools.
So I think we've seen pretty modest and pretty non-linear progress in how we have gone about trying to develop cyber norms internationally. We've used a couple of different buckets of tools, and I think we will probably see the same use of those buckets in the AI space. In fact, I think we've already started to see some of those same uses of those buckets.
So first, I think there is some level of broad multilateral agreement that states should apply existing international law to these tools; that may mean the laws of armed conflict, it may mean jus ad bellum, it may mean human rights law. Debates linger about how exactly to apply international law to these tools, but there's a kind of broad conceptual agreement. I think there's also been an effort to develop, in both the cyber setting and the AI setting, some somewhat new, pretty vague, non-binding norms among a wide group of states.
For cyber, it's in the UN Group of Governmental Experts. For AI, it's in the Convention on Certain Conventional Weapons forum. I think in both cases we'll also see work in minilateral coalitions, and by that I mean groups like NATO or the Five Eyes, to develop more specific norms, to be more concrete about how we should engage in testing and verification and senior-level approval and so on. But that will happen in private.
I think we'll see states make unilateral statements of policy. We have seen a lot of that in cyber, and I think we'll see some of that in AI as well. We might see sanctions on bad AI actors; we've seen sanctions on bad cyber actors, and I can't imagine there'd be a reason not to do that for misuses of AI systems too.
And then finally, we've used criminal law on the margins in cyber, and maybe we would see that for misuses of AI too. But I think your final comment before turning this over to me was about China, and I think at bottom, what's driving this inability to really develop a robust series of regulations on this is just a deep mistrust at this point among the really important players in this space, right?
So the P-5 and Israel and Iran and North Korea, these players that have a lot of these tools or are looking to build them, are not in a place right now to sit down and have a serious conversation and trust each other that they will comply.
Alan Rozenshtein: I think that's a good place to leave it. Ashley, congrats on a really excellent and timely book, and thanks for coming on the podcast to talk about it.
Ashley Deeks: Thanks so much for having me. Enjoyed the conversation.
Alan Rozenshtein: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare Material Supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.
Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. Check out our written work at lawfaremedia.org. This podcast is edited by Jen Patja.
Our theme song is from Alibi Music. As always, thanks for listening.