
Lawfare Daily: Rebecca Crootof on AI, DARPA, and the ELSI Framework

Kevin Frazier, Rebecca Crootof, Jen Patja
Friday, July 19, 2024, 8:00 AM
Will there be an AI arms race?

Published by The Lawfare Institute in Cooperation With Brookings

Rebecca Crootof, Professor of Law at the University of Richmond School of Law and the inaugural ELSI Visiting Scholar at DARPA, joins Kevin Frazier, a Tarbell Fellow at Lawfare, to discuss the weaponization of emerging technologies and her new role at the agency. This conversation explores the possibility of an AI arms race, the value of multidisciplinarity within research institutions, and means of establishing guardrails around novel uses of technology.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Intro]

Rebecca Crootof: Take a second to have a conversation about, okay, you're trying to do this. And if you're, if you're wildly successful, what else is that going to mean? What are gonna be the implications of that? Who's gonna benefit from that? Who could be harmed by that? Who's gonna feel threatened by that? Who might want access to that?

Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, assistant professor at St. Thomas University College of Law, and a Tarbell Fellow at Lawfare with Rebecca Crootof, professor of law at the University of Richmond School of Law, and the inaugural ELSI Visiting Scholar at DARPA.

Rebecca Crootof: The fact that DARPA has hired someone to raise these questions and to encourage everybody to be thinking about these things, and again, not to say there aren't plenty of folks at DARPA already doing this, but someone to make it a little bit more systematic, speaks to the fact that DARPA feels this obligation.

Kevin Frazier: Today we're talking about the ethical, legal, and societal implications of the militarization of emerging technologies such as AI.

[Main Podcast]

It's a common observation that AI is a dual-use technology. Its commercial and military uses are seemingly limitless. Those substantial possibilities invite more grounded analysis into how and why the military, specifically the U.S. military, is thinking about AI and other emerging technologies. That's why I'm so glad to be talking with Rebecca Crootof, the inaugural ELSI Fellow at the Defense Advanced Research Projects Agency, better known, perhaps exclusively known, as DARPA.

Rebecca, let's start with your current role at DARPA. What the heck is an ELSI fellow?

Rebecca Crootof: Well, hi Kevin. Thanks for having me. And yeah, happy to talk about it. So you've probably never heard of it before because I am the first one, the ELSI Visiting Scholar, where ELSI, like everything at DARPA, is an acronym.

So ELSI stands for Ethical, Legal and Societal Implications, and it's an entirely new position. And I've had a rather incredible amount of, of freedom and autonomy in setting up what it means. And so the basic idea behind it, and the mission I've sort of taken away, is to continue doing what DARPA does really, really well already.

Which is thinking about what problems new technologies could solve, and also, in doing so, thinking about the side effects of those technologies, and to take that, what it's doing well, and do it even better by making that process a little bit more explicit, a little bit more structured. And so part of what I've been doing is creating a set of practices and, and agency-wide policies and processes for thinking through side effects a little bit more systematically.

Kevin Frazier: And Rebecca, forgive me if I get this way off the mark, but I'm assuming you didn't grow up thinking, I can't wait to one day be the inaugural ELSI fellow, I wanted to be president or in the NBA. Neither of those worked out. So how the heck did you end up in this role? What was your path to becoming an ELSI fellow?

Rebecca Crootof: Yeah, so full disclosure, ELSI is, is a known acronym outside of DARPA, and it's a field, but it is not an acronym that I knew about until shortly before I applied for this position, which is always a little embarrassing to admit to folks who have known about the acronym for far longer than a year and a half.

My work I would describe as largely oriented by thinking through ELSI issues for the past decade or so. So my day job in my normal life is that I am a law professor at the University of Richmond, and my work focuses generally on the intersection between new technologies and the law, with a particular focus on new military technologies and how new military technologies and the law of armed conflict each influence each other, each push the other one to evolve in various ways.

I've been particularly interested in accountability for accidents in war and how new military technologies raise that issue with greater salience. And so I've been thinking about, you know, the side effects of technology for a pretty long time and, and so that's where I come to this position from.

Kevin Frazier: You mentioned that this is just a short term role in terms of you're here for a second, you're visiting quite literally, and then you'll be back to the full-time job. So how would you describe your jurisdiction, your authority? What sort of lasting mark are you going to have on DARPA? Are you going to have to hand this off and just hope that the next fellow takes the reins from you or what sort of actual concrete actions can you make?

Rebecca Crootof: So there's a little bit of both in that, in my answer there, right? So on one hand, well, yeah, I'm hoping I do something that's worth continuing, right, and so, as I mentioned, I've been creating a new process for every single, the word we use at DARPA is predecisional, program.

Every program that is in formation is being structured, where the metrics are being evaluated, where the types of tests that something needs to succeed at to continue are being determined, and during that process of ideation and iteration there's an ELSI conversation. And ideally we bring in an outside expert with relevant subject matter expertise, because that can be hugely informative, and sometimes they're stuck with me, right, and I've got my own areas of expertise.

But you know, they, they certainly don't cover the full gamut of everything DARPA does. But even then, it can be useful to be the person to come in and just take a second to have a conversation about, okay, you're trying to do this, and if you're, if you're wildly successful, what else is that going to mean?

What are gonna be the implications of that? Who's gonna benefit from that? Who could be harmed by that? Who's gonna feel threatened by that? Who might want access to that? What are gonna be the sources of error? What are gonna be the sources of vulnerability? How might this be weaponized by us or against us? What kind of countermeasures might we, we want to develop? So really taking a second, and when I say a second, I mean a very extended second, lots of extended seconds, to talk all of this through.

Kevin Frazier: In government, of course, it's always extended seconds. Yes.

Rebecca Crootof: To think through the implications of what success would mean and also what are, what are the risks associated with this program and to this program, and how do we minimize and mitigate those in how we design the program itself.

Now, a couple things on, on your question about what success means for me. If that's all I do, create this process, it's not that that's a bad thing, but if everyone just thinks, oh, doing ELSI is having this conversation and filling out this worksheet, and then you check that box and it's done, that would be a failure.

So part of my goal is trying to create a culture of asking these types of questions, not just at one point in program development, but all through program development, all through program implementation, and having it not just be the responsibility of one person to raise these questions, but expecting everybody involved in a program, the manager, the technical experts, the assistant, the performers, to raise and think about these types of questions all throughout the course of the program. And so the, the process is a sort of quantifiable thing.

Okay, we did that, right? So I can list that, but my actual goal is, is shifting the culture more broadly to make this process, that again, already is happening, more explicit, more regular. And then someone else will come in, in January. We're in the process of interviewing folks right now to be the next ELSI Visiting Scholar, and they'll take a look at what I've done and they'll come at it with an entirely fresh perspective and they'll say, okay, this seems to be working. Let's keep doing that. I hope that, I hope they say that about at least one thing.

And they'll say, no, this one, this isn't working. Let's scrap that. And also, let's try this third, fourth, fifth, eighth thing that Rebecca never thought of. And that is the culture at DARPA. People come in for two year terms that can be sometimes extended to four years, and then they rotate out again. And it's, it's very much a feature, not a bug of how DARPA works, that there's a lot of high turnover and a lot of fresh approaches and, and people coming in with fresh perspectives on, on established ideas and established programs.

Kevin Frazier: Not to tap into, let's say, some popular stereotypes, but I think if we were to do a quick

Rebecca Crootof: But tapping into popular stereotypes,

Kevin Frazier: but someone had to, if I were to go pick a random Joe or Jane on the street and say. Hey, there's this new program at DARPA. They slow down. They ask about these worst case scenarios, and then they maybe step off the pedal on this development of new technology.

They'd say, yeah, sure they do. You know, it's the military. They're just gonna go pedal to the metal until we come out with the latest and greatest technology. Which is all to say, how has the receptiveness of DARPA employees been so far? Is it, oh no, Rebecca's coming down the hall, quick, close the door, or, you know, don't invite her to that meeting? How, how is the receptiveness going so far?

Rebecca Crootof: So I had the experience of working on what I would now call an ELSI working group for one DARPA program. And I didn't know this at the time, but, but part of the goal of the program was to try and get a sense of how useful it was to have that kind of input on a program.

And so they had a little bit of an A/B testing setup that we didn't realize, where they had one track where they were going to incorporate the ELSI group's ideas, and another one where they weren't. I didn't know this as a member of the ELSI working group, but I, you know, we, we had a bunch of suggestions and a bunch of recommendations.

What I learned later was that the performers, the people who are creating the technology, trying to achieve the goals that DARPA sets for them, on every single recommendation we made, they said, that's not an ELSI thing. That's a good engineering thing that's gonna make this technology better. And so they, DARPA, ended up scrapping the A/B testing, because whenever we made recommendations as members of the ELSI working group,

the performers tended to want to integrate them. They didn't want to create the thing that didn't have what we had suggested, and it ended up being that the performer that adopted the most of our recommendations was the only one to hit all the technological metrics and all the goals and proceed to the next phase of that program.

So this is firsthand experience with how having ELSI conversations, and what you characterized as slowing down, isn't necessarily slowing down, and actually, one, the program was right on timeline, there was no slowdown at all, but two, it made the resulting technology so much better, in ways that the performers, I think they're gonna put out a little podcast about it actually, from DARPA, where the performers will talk more explicitly about how useful it was to get those outside perspectives on things that they would not have thought of otherwise.

So I came into this role firmly believing in ELSI as an enabler, right? That it is beneficial to think about potential concerns, to think about potential barriers ahead of time, to think about sources of accidents and vulnerabilities, and address those from the beginning. But also that it's not just about being, you know, the fun police, right?

The person who comes in and says no. It's also about anticipating alternative uses, right? Thinking about how, if someone is thinking about a defense problem and they're coming up with a defense solution, well, what happens when you also think about how this thing you're creating could also be a solution in the public health space, in the economic space, right, in a different area.

Sometimes it's also just about identifying unknown impacts of the technology that, you know, may or may not exist. But if you don't measure them, you won't know. And if you build measuring them in from the beginning, then you've got a much better sense of what you've created at the end of it.

And so sometimes these conversations just lead to thinking about unknowns. And so I, I came in thinking, I'm not here adversarially, I'm here to make these programs, you know, to make these programs better. If that doesn't sound too full of myself, I guess, or anything. But I wasn't sure how I would be received.

And I have had so many phenomenal conversations now with different program managers, and when I ask how they're gonna handle something or have they thought about something, I tend to get either a, oh yes, we've thought about that, here's what we're doing about it. And that is very, you know, that, that makes me very happy to hear.

And then sometimes I get the, oh, oh, I hadn't thought about that. And then they start writing something down, and then we have a great conversation about how to address whatever it was that I raised.

Kevin Frazier: This brings to mind an effort ongoing in academia called public interest technology, where, for example, you have an ethicist come into CS 50 at Harvard, right?

And help raise their hand for these future engineers at Facebook or at Twitter and say, ah, this is how you think about building an ethical product. This is how you think about these moral quandaries in the heat of the moment. And sometimes that approach has been challenged as perhaps being too early in the pipeline, right?

You're, you're introducing all these important concerns, but there's a difference between learning something sophomore year and then being able to implement that years down the road in a very profit-seeking setting. So how would you respond to the idea that this is great? I think everyone would say, yay, ELSI. I think very few people would be outwardly opposed.

But is this too early in the pipeline of tech development and then deployment? Because if we think about some of the concerns for the use of, let's say, AI or AI-informed weaponry, the concern is with the, the folks using it in the field. So how is this having downstream effects on that use of the technology, would you say?

Rebecca Crootof: So I'm, I'm finding myself starting to develop a couple of catch phrases.

Kevin Frazier: That's okay if you, you gotta, you gotta knock those out. You're a professor, after all, you gotta have good points.

Rebecca Crootof: I'm a professor, after all, right? So I always have to put things in bullet points to figure out when I'm making my PowerPoints.

But, but one of the phrases that I find myself coming back to a lot is early and often, right? That yeah, if this was, again, going back to this idea, if this is one conversation and then you can say, okay, we've done the ELSI part, right? That would be a failure. That is not going to be a useful way of actually thinking through and addressing the implications of the technologies that DARPA is creating and fostering.

Instead, it needs to be an ongoing conversation, right? So I don't think, I don't think there's such a thing as too early for thinking about the potential side effects of something. But you know, I am a huge fan of science fiction, so I'm a real fan of thinking through the side effects of things that are not, are not invented, may not ever actually be invented.

And I really enjoyed those stories. But I think as you're creating the program, there's an opportunity to think through things and make decisions in how a program is structured that can have dramatically different impacts on how accessible a technology is. You know, we've talked about, maybe for some that are hugely beneficial, maybe it's worth making price point a metric for success.

And so if you incorporate that from the beginning, that changes the course of how that technology is developed and results in something that might be far more accessible than it would've been otherwise. So early is important, and then the often is really important, right? So it's not just me talking to folks at the beginning as they're thinking up ideas, but as the program goes on and things are discovered.

You know, we, we have proof of concept, and that raises a whole bunch of different questions, because we've got a proof of concept in, you know, this format when it was kind of envisioned in that format. Well, what happens? How does that change the implications if the format has changed, or we know that it's possible in this way, but not that way? Well, that cuts off a whole interesting conversation about potential implications, but then raises a whole bunch of other ones.

Kevin Frazier: One thing that I'm curious about, just to dive in a little deeper before we move on to some other exciting projects at DARPA in this space. I'm just curious. So we're speaking in early July of 2024 and we recently saw news from the Defense Post actually that DARPA has unveiled a flying wing drone project featuring hybrid electric propulsion that converts fuel to electricity.

My mind is already blown, just reading that sentence. I'm curious, for a sort of new project, is it that the developers come to you first and say, Hey, Rebecca, we're starting this new initiative.

We think it's gonna be this crazy awesome drone. Tell us your worst case scenarios. Or do you kind of have your hands in all these different pies and you say, ah, I see you're beginning to work on something, I'm knocking on your door now. Or is it a little bit of both? What's the process of kind of triggering that ELSI review?

Rebecca Crootof: So that moment that you talked about, that feeling, just thinking about this makes me go, wow, that is my daily life. I get to go from a conversation about this, wow, to a conversation about that, wow. And it is such an incredible experience to talk to all these brilliant folks who are trying so hard to solve so many different types of seemingly impossible questions.

So it is, there's just an element of this that I just have to, like, share that moment with you, because it is, it is a rather incredible space to be in. In terms of what triggers the ELSI review, or the ELSI conversation, it's everything. So it can be folks will come to me and say, okay, I am wrestling with this idea.

I think I've got an idea. Can we just chat? And then we'll just have a conversation about that. Sometimes it will be the program is 95% of the way completed, right, far after the predecisional stage. And someone will say, okay, this just came up. What do we do about this? Or how should we think about that?

And we'll have a conversation about that. And then I would say, but the thing that is standard, right, that is becoming common across all programs, is this, you know, process that is at one step in the program development. And everything at, at DARPA is a little sui generis unto itself. There's six different tech offices and they all have their own approaches and procedures.

And so one thing we've been figuring out is, where does this fit in best for all the different tech offices and all their, their different existing processes? But at some point there is a period where we do what I'm calling an ELSI assessment, which is just to memorialize a conversation where we make sure that I, or the outside subject matter expert advisors they also sometimes bring in,

understand the problem the program is trying to solve, understand the approaches identified as potential solutions. And the advisor and the program team have a conversation, sometimes multiple conversations, around, in the near term, like during the program's lifecycle, what are gonna be the things we're thinking about?

Maybe that's it, maybe it involves animal subjects research. Well, you wanna make sure you're in compliance with all the regulations there, right? So you think about that from the very beginning, or maybe you can identify, maybe you're making a medical wearable. Well, that's going to eventually require FDA approval.

Wouldn't it be great to think about that as you're developing it, as opposed to after you've shown some technological capability but you haven't complied with any of the regulatory requirements for approval? So sometimes it's that thinking through, in the, in the program's lifecycle, what needs to be addressed, and then there's a bit of a brainstorming exercise.

We think, okay, let's assume all the technological milestones are met. This ends up getting used or deployed. What happens next? What are the implications of this? And we sort of divide that conversation up into near term implications, where you identify who are gonna be the deployers, who are gonna be the users, who are gonna be the entities affected by it.

And that could be whole industries, right? That could be particular types of individuals. And then, long term, what are gonna be the implications? What is this going to shift? How is that gonna change strategies? What kinds of countermeasures might be developed? What logistical chains are gonna be necessary, what training is gonna be required, right?

And so thinking through some of those broader impacts as well. And then, based on that, we put together a series of recommendations for the programs. You know, maybe you wanna talk to the privacy and security guys about this, right? Or maybe you want to go have a conversation about that. Maybe you want to learn more from general counsel about, about, you know, this legal distinction.

Or maybe it's about the metrics that they set for the program. What constitutes success? Or maybe it's we need to involve some more diverse perspectives on this. We need to have a workshop or maybe an ongoing working group with stakeholders or people, entities that are gonna be affected by this and get their input from the beginning.

Kevin Frazier: ELSI is not in isolation. As you mentioned from the outset, this is a part of a broader culture at DARPA that perhaps is underappreciated by external folks, those average Joes and Janes I of course mentioned earlier. So to dig deeper into DARPA's current AI-related research, I think we both would agree that it doesn't take a lot of imagination for folks to see how AI systems could come to dominate decision making in critical, time-sensitive situations.

This phenomenon, over-reliance on automated systems, is commonly referred to as automation bias. This, of course, isn't news to you based on your scholarship, or, thankfully, to DARPA, and so I would love to get more details on the, quote, Friction for Accountability in Conversational Transactions project, which is another great acronym, the FACT project.

What is this? Why should we know about it, and what is it doing to reduce the odds of some sort of misuse of AI?

Rebecca Crootof: Yeah. So I, I myself am biased coming to this, having co-written a piece with Margot Kaminski and Nicholson Price called Humans in the Loop, where we heavily critiqued this, like, slap a human on it solution, right?

And, and really delved into the issues associated with delegating decisions to machine intelligence, artificial intelligence, algorithms, however you wanna characterize it. And, and one of the big issues associated with this is automation bias, right? If you just have a human in the loop, we, we have a ton of studies that show that they have a tendency to defer to the decisions made by computers, and sometimes hilariously, right?

As when folks have driven into the ocean as they're following their navigation guidance, as opposed to paying attention to the fact that they're driving into an ocean. Sometimes very problematically, particularly in high-stakes, high-speed, you know, environments, which characterizes a lot of armed conflict.

And so there's a lot of interest in, you know, as we talk about or think about appropriate delegation of decision making capabilities, how do we ensure that the human operator or human supervisor is not subject to this automation bias issue? And how we do that is gonna depend on the program.

So some programs, it's all about designing the AI, and, and other programs, it's about how, how is data presented? But the one you mentioned, FACT, is near and dear to my heart because the entire program is about how do we reduce automation bias that shows up in conversations with large language models, right?

As we get to a space where we're going to be increasingly conversing with automated systems, how do we minimize automation bias in that environment, right? It's one thing to plug in a search term and get a result and look at it in context, and it's another thing to ask Alexa a question or ask Siri a question and get an answer.

And as humans, we are slightly more biased toward that verbal answer. We don't think about it as critically. And so FACT is all about how do we build in useful speed bumps in conversations with LLM-enabled agents. We don't have an answer yet, right? That's why we have the program. But I think it's really interesting to think about it from that useful speed bumps aspect, right?

Like, we recognize speed bumps are useful at slowing us down appropriately in certain environments. And there's not one of us who has not resented a speed bump at some point in our life, right? And so how do you build in friction that doesn't become so frustrating that people stop using it, right? So what does it mean to have productive friction, and what will enable folks to think more critically about the conversations they're having without saying, no, you know what? This is too annoying. I'm going back to the non-friction conversation, and skipping dealing with the speed bumps altogether.

Kevin Frazier: Yeah. Whenever, you know, I hit a tough point in a lecture, I just say, sorry, students, I'm having some productive friction right now, I'll get back to you in a second. Really, I love it that it's the, the law professor saying that she loves FACT; it's just too on the nose, so I'm glad to hear it.

And I guess one fascinating point that this raised as well: you have written that, quote, while individuals at DARPA exercise immense influence over the development of new military technologies, not one of them is subject to legal liability for these choices, and nor should they be under any reasonable mens rea or proximate cause analysis, end quote. I had Mark Berg and Ashley Des, both law professors like the two of us, come on and talk a little bit about attribution and accountability for the use of these different tools. To what extent are you thinking about that with your ELSI hat on, or is that a future Rebecca problem that you plan to return to?

Rebecca Crootof: That is a past and future Rebecca problem that I've been thinking about a lot. Yeah. Accountability for, for accidents. It's always a really complicated and really interesting question, says the tort lawyer, or law professor. It's a really hard question, this, this issue of when does, when does an individual or group of individuals become responsible for things that they contribute to creating?

And, and I stand by my statement that there's no legal liability for the folks at DARPA, and nor should there be, right, under how we conceptualize the American legal system. That isn't to say that DARPA folks aren't responsible for their work and for what they do, but I think it sounds much more in the realm of moral responsibility. And I'm really hesitant getting into this conversation because I am a law professor. That is my comfort zone. I don't do normative –

Kevin Frazier: Who needs normative? Get me outta here.

Rebecca Crootof: Oh yeah. Well, just when you start talking ethics and morality, right? It starts getting a different, it's a different kind of fuzzy than the kinds of fuzzy and gray areas I'm accustomed to, but I think it is an important distinction, right?

DARPA is a research and development agency. It focuses on trying to create breakthrough technologies to give the United States a technological edge. In doing so, I think it does have a lot of responsibilities, and I call them moral responsibilities, to take steps to think about the side effects of what it's doing, think about the implications, minimize harm, minimize accidents, minimize vulnerability, you know, minimize automation bias.

But it, it does not actually control how the technology it develops is used, is operationalized, is deployed, right? It's one part of the ecosystem. It can do some things, right? We can identify certain types of misuses and we can make recommendations about what types of rules of engagement should be developed for certain types of systems.

We can, you can build accountability mechanisms into technology we're developing to better understand how someone uses it, how someone using it makes decisions, or record information about the decisions that they make. But yeah, it, it is, it is a hard issue for someone who's very accustomed to thinking about after the fact accountability, to wrestle with what is DARPA's responsibility when there's no law enforcing it.

But I think that really speaks to, I, I mean, just the fact that DARPA created this position, right? The fact that DARPA has hired someone to raise these questions and to encourage everybody to be thinking about these things, and again, not to say there aren't plenty of folks at DARPA already doing this, but someone to make it a little bit more systematic, speaks to the fact that DARPA feels this obligation.

It is not facing legal liability for its actions. It's doing it because it thinks it's gonna make the programs better.

Kevin Frazier: And to get at the outer bounds of ELSI, I have a curious question related to DARPA's recent announcement that it started a $78 million project called the Optimum Processing Technology Inside Memory Arrays program.

And yes, listeners, that is the OPTIMA program, which aims to spur the creation of new types of chips that can more efficiently run AI applications. And so my question is, if you talk to some folks in the AI space, they would say this procurement of chips, this effort to further and increase AI capacity, is in and of itself a very risky activity, and is news that, to the Chinese or to the Russians, might perpetuate this idea of a sort of AI arms race.

So how early in the question of what should DARPA do does ELSI come in, right, in terms of those threshold questions of, well, maybe we shouldn't even begin to press the bounds of this new technological frontier and should instead take a step back and, and pause in that regard? Or are you usually after the fact of, hey, you know what, OPTIMA's happening, buckle up, and now come in with your ELSI analysis?

Rebecca Crootof: I think it's both, but without the fatalism of the way you phrased it. That's fair. There's a question of what are gonna, what are the benefits of doing this, right? What are the potential benefits of doing this? What are the marginal benefits of doing this compared to what's already being done outside?

And the rest of the, the R&D environment, and the flip side, what are the risks of doing this, and what are the marginal risks, again, compared to what's already being done? And that conversation is going to be different for every single program and for some technologies. So OPTIMA is an interesting one because one, one of the things I often teach my students, right, is that technology is not neutral, or often not neutral.

That, yes, both a toaster and a gun can be a weapon, but man, it's really, it's a lot easier for that gun to be a weapon than for the toaster to be a weapon. And man, is it really difficult to toast bread with a gun? And so technologies are not neutral in so far as they all enable different capabilities.

But then there are some technologies that really sort of, when you're trying to say, what are they going to do? How do they straddle complicated lines, right? So the, the example I then give is the knife, and you're nodding, right, it's, it's the knife: is it a weapon, or is it inherently a weapon or not? Well, it is designed to enable cutting flesh, right?

And, and so I see a lot of developments with new AI capabilities as being in this category of, they're going to enable things and it is difficult to predict exactly what they might enable, and it is worth thinking really hard about, right? What can we design in to make something enable more of what we want and less of what we don't want?

OPTIMA is really interesting because they're basically, they're trying to generate these ultra-efficient AI chips that will minimize the need for as much power, you know, won't generate as much heat waste, and that could enable a bunch of different capabilities in all sorts of directions, right? And so it's, it's very much in that category of, oh, I'm not gonna call it neutral, but a lot is gonna depend on what we do with that and, and how we frame it.

You, you talked about how it might be perceived by adversaries. Well, what's the flip side of not doing that and thinking through those elements?

Kevin Frazier: Just as a disclaimer to listeners, please do not try any toasting with any sort of gun. Let's just leave that off the table for now. Very good thought experiment. Let's not try that in practice.

Rebecca Crootof: Maybe you should be a torts lawyer. Yeah. You know.

Kevin Frazier: I, I do love a good torts hypo. I, I envy my professors who get to come up with torts hypos, because those are, those are fun; con law hypos, less fun, but you know, to each their own. So, one thing I'm really curious about is you have written previously that DARPA pursues, quote, mind-bogglingly implausible research. That sounds really fun. What are some projects that you can publicly share that have just blown your mind? What, what is just that science fiction land for you that we are actually realizing in the real world at DARPA?

Rebecca Crootof: I'm pretty sure that quote was in, had a fake cough before and after it, but well, now we've got it in here.

Kevin Frazier: I've added it in. We're good now.

Rebecca Crootof: So, oh, this is a frustrating question to try and answer, in part because so much of my work, I've, I've been at DARPA now six months and the vast majority of my time there has been working on predecisional programs, which are the ones I'm not allowed to talk about yet, because they are still taking shape.

Kevin Frazier: Worth asking though.

Rebecca Crootof: It was worth asking. And it's not that there aren't a gazillion fascinating ones, but I would say one thing, there's a whole category of programs that I do love, and this is coming from, you know, wearing my ELSI hat, that are thinking about how do we reduce byproduct, how do we reduce waste, right?

How do we reduce waste for microchips? How do we reduce waste for fertilizers? How do we reduce waste, you know, for like in all these different areas, you know, heat waste, human waste, other kinds of waste. Or the flip side of that is how do we then do something productive and useful with that waste?

And so this very much appeals to me 'cause I think of all of these programs as inherently ELSI programs. So you mentioned, like, where in the process does ELSI come in? I think there are whole programs that are motivated by what I would say are ELSI considerations. And so this whole category of programs that are focused on minimizing or reducing waste or reusing waste, finding a way to make that waste something productive.

Those, all of those programs are animated by ELSI interests. And then there are whole other categories of ones that are all about minimizing cybersecurity risks, cybersecurity harms; all, all of those are animated by ELSI considerations. There's this one fun one. I can't remember what the acronym stands for.

ICS. That is just about, okay, as we get into this world of augmented reality, how do we reduce the risks associated with augmented reality? And that includes both the cybersecurity risks, but also the fact that if certain systems are hacked, they can be used to induce nausea, right? They can have physical effects on their wearers.

And so how do we build in safety and security to these systems from the very beginning of how they're being designed and deployed? And I had a great conversation with the program manager on that. That program was, was off to the races before I began, where I'm like, this is an ELSI program. And he was like, I never thought of it that way.

I'm like, no, the entire point of this program is, you know, ELSI issues. And, and so I think there are just whole categories of, of DARPA programs that are animated by trying to address problems and solve problems. And those are fun to think about. How do we, how do we expand those and, and have something that addresses a problem maybe over here for a warfighter, over there for a civilian?

I guess I should probably give a shout out to all the work that the biotech office is doing on this front. I think there has been some fun stuff in the news about their smart bandages, and when I, when I started here, my husband, who's a doctor, was like, wouldn't it be great if they could make dehydrated blood?

And then we looked it up and we're like, there is a program to make dehydrated blood. There's a program for it. Yeah. Put a bird on it. There's a program for it. That's great. So I'm, I'm having a hard time answering your question about the coolest DARPA program because my mind just flits from this one to that one to this other one.

Kevin Frazier: I believe it. I believe it for sure. So some folks might be listening to this podcast and thinking, wow, I'm pretty intrigued by joining the government and taking on these important questions and lending my expertise. How is the process itself of getting clearance, getting up and running at a place like DARPA, integrating into this government bureaucracy?

Should our listeners say, ah, I should start Googling some other fellowship opportunities, or perhaps would you say, you know what? Stick with the day job. It's easier. This is just a little, little too complicated. Maybe the juice isn't worth the squeeze?

Rebecca Crootof: If I've not adequately conveyed how fascinating this is, right?

Like, if you can possibly think the juice isn't worth the squeeze after listening to me, then something is not coming through this medium very well. But how to, how to get involved? Well, I will say, like, from my personal experience being the first one, it's definitely been a, okay, let's figure this out and let's figure that out.

Um, and hopefully, you know, created some precedent that is going to be very useful for the next ELSI Visiting Scholar. But the plan is to have a, you know, we're, we're interviewing right now for the 2025 ELSI Visiting Scholar. I very much hope that there is a 2026, 2027, you know, and so on and so forth, ELSI Visiting Scholar.

So if you're interested in taking a year of your life to learn about programs and ask interesting questions, I highly recommend looking into applying for it. But it doesn't need to be a, you know, take a year of your life for it. As I mentioned, we're, we're looking, we're working with a bunch of outside subject matter experts, and I've trained up a bunch of individuals and am bringing them in to be consultants on particular programs, and they have the ability to ask questions I never would have known to ask, and that is incredibly useful.

Sometimes I, yeah, sometimes ignorance is useful. Sometimes knowledge is really, really useful. The nature of this role, right, this nature of being able to walk into a conversation and ask questions, you know you are never gonna catch everything. And, you know, it's impossible to think of every critically important question.

And that is both humbling and freeing, right? Because it allows you to say, well, at least I can ask this one, and at least it's useful to ask this one. But the, also the corollary to that is how important it is that it's not one person, that it's everybody asking these questions, because everyone asks different ones, and, and it's a little bit of the Swiss cheese model of accident prevention.

But one of my long, my longer term goals or hopes is that these types of questions become more and more second nature, not just at DARPA, but that it percolates out through the greater R&D ecosystem. And it's not just an ethicist coming to one computer science class being like, you should sometimes think about these things, but that it's something that everyone thinks about regularly.

We are looking to have diverse experts, and so they should really feel free to reach out. They, we have an email address. We're, we're firmly in, you know, late-nineties-level technology for that, even while developing drones.

Kevin Frazier: But also send us a fax.

Rebecca Crootof: Yeah. So ELSI, elsi@darpa.mil, and if you're interested in potentially being a subject matter expert who is an ELSI advisor for particular types of programs, let us know.

Kevin Frazier: Well, we will go ahead and leave it there. Thanks again, Rebecca.

Rebecca Crootof: Thank you.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org.

The podcast is edited by Jen Patja, and your audio engineer this episode was Noam Osband of Goat Rodeo. Our theme song is from ALIBI Music. As always, thank you for listening.


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Senior Editor at Lawfare.
Rebecca Crootof is an Assistant Professor of Law at the University of Richmond School of Law. Dr. Crootof's primary areas of research include technology law, international law, and torts; her written work explores questions stemming from the iterative relationship between law and technology, often in light of social changes sparked by increasingly autonomous systems, artificial intelligence, cyberspace, robotics, and the Internet of Things. Work available at www.crootof.com.
Jen Patja is the editor of the Lawfare Podcast and Rational Security, and serves as Lawfare’s Director of Audience Engagement. Previously, she was Co-Executive Director of Virginia Civics and Deputy Director of the Center for the Constitution at James Madison's Montpelier, where she worked to deepen public understanding of constitutional democracy and inspire meaningful civic participation.

Subscribe to Lawfare