Cybersecurity & Tech

Scaling Laws: How AI Can Transform Local Criminal Justice, with Francis Shen

Alan Z. Rozenshtein, Francis Shen
Tuesday, January 13, 2026, 10:00 AM
Discussing the intersection of neuroscience, AI, and criminal justice.

Alan Rozenshtein, research director at Lawfare, spoke with Francis Shen, Professor of Law at the University of Minnesota, director of the Shen Neurolaw Lab, and candidate for Hennepin County Attorney.

The conversation covered the intersection of neuroscience, AI, and criminal justice; how AI tools can improve criminal investigations and clearance rates; the role of AI in adjudication and plea negotiations; precision sentencing and individualized justice; the ethical concerns around AI bias, fairness, and surveillance; the practical challenges of implementing AI systems in local government; building institutional capacity and public trust; and the future of the prosecutor's office in an AI-augmented justice system.


This Scaling Laws episode ran as the Jan. 16 Lawfare Daily episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Please note that the transcript below was auto-generated and may contain errors.


Transcript

[Intro]

Alan Rozenshtein: When the AI overlords take over, what are you most excited about?

Kevin Frazier: It's not crazy, it's just smart.

Alan Rozenshtein: And just this year, in the first six months, there have been something like a thousand laws…

Kevin Frazier: Who's actually building the scaffolding around how it's gonna work, how everyday folks are gonna use it?

Alan Rozenshtein: AI only works if society lets it work.

Kevin Frazier: There are so many questions that have to be figured out and—nobody came to my bonus class! Let's enforce the rules of the road.

Alan Rozenshtein: Welcome to Scaling Laws, a podcast from Lawfare and the University of Texas School of Law that explores the intersection of AI, law, and policy. I'm Alan Rozenshtein, associate professor of law at the University of Minnesota and research director at Lawfare.

Today I'm talking to my University of Minnesota colleague, Francis Shen, professor of law and director of the Shen Neurolaw Lab and who is also running to be the next county attorney for Hennepin County, which includes the city of Minneapolis. We discuss how AI can transform local criminal justice from investigation clearance rates to precision sentencing, to mental health interventions, and what it's like to run for public office with AI as a central plank of one's campaign.

You can reach us at scalinglaws@lawfaremedia.org, and we hope you enjoy the show.

[Main Podcast]

Alan Rozenshtein: Francis Shen, welcome to Scaling Laws.

Francis Shen: Great to be here. Nice to see you, Alan.

Alan Rozenshtein: So, Francis, you and I have been colleagues at the University of Minnesota Law School for nearly a decade. And you're an expert on law and technology and the criminal justice system, including AI. But you're also running for Hennepin County attorney, kind of on this AI platform, and that's part of the reason I wanted to talk to you.

But before we talk about, sort of, the political and policy angles of this, I just wanna start by getting a little bit of your background, starting with neurolaw, because that's really how you started as a legal academic, and still a lot of what you work on.

Your lab has the motto "every story is a brain story," which I love. And the American Law Institute recognized you as a pioneer in establishing this interdisciplinary field. And you've been working in this for a long time, including in the criminal justice context, I think at least, you know, all the way back to 2010 and things you've written about.

So, what were you seeing 15 years ago that others missed? And how does approaching criminal justice through the lens of brain science, and what does that even mean, inform your thinking about AI in the legal system?

Francis Shen: Oh, great set of questions. So let me give you the short answer and we can pick up on any of it.

I went into graduate school, I did a JD-PhD, and of course the JD was in law. The PhD was in the social sciences, with a government and social policy focus: economics, political science, a lot of sociology, so all your traditional social sciences; the fields of study that look at human behavior. But one of the things that I found was I really didn't understand the "why" of what people were doing. That took me to psychology.

And that took me to neuroscience. And in particular, I was looking at legal and policy responses to survivors of trauma. This was 2001, so that meant returning soldiers from the wars in Iraq and Afghanistan that soon followed, and also survivors of rape and sexual assault. And in both cases there were a number of people who were, and I put it in quotes for a reason, "physically okay," but really weren't.

They were dealing with a lot of mental stress, a lot of PTSD, a lot of mental trauma. And the social sciences talked a lot about how to respond, but they really didn't give me, or others, any sense of, again, like, why is it so difficult if the arms are working and the legs are working, and why is it that some who experience the same thing are able to kinda move past it and others are not?

Psychology was a little bit better, but it still left me thinking, there must be some reason, right? Why? And that took me to neuroscience. And in getting to neuroscience, I owe everything, actually, to a cognitive psychologist named Steven Pinker, who became my mentor and joined my dissertation committee.

And by the end of graduate school, I was realizing that I really wanted to go into this zone of neuroscience and law. And just lo and behold, there was a MacArthur law and neuroscience project that had just opened up. They needed a postdoc, I was a pretty good fit, and off I went to Santa Barbara, California, drove cross-country, and the rest is sort of history.

But that's the origin story: I was really looking at survivors, victims of trauma. But once I got into brain science, and this kind of answers your question, I began to see that all the things I cared about: why did a criminal do the things that they did? Why does anyone do anything? How is it that I am feeling whatever I'm feeling inside? How is it that I'm talking to you and responding?

All of those things are brain-related, and in fact really deeply brain-related. And I've spent the rest of my career trying to think about law, which is in the business of governing human behavior and understanding and improving human mental states, and asking: how can neuroscience, which tells us a lot about those mental states and a lot about how we make decisions, improve law?

And in a nutshell, that's neuroscience and law with one very important asterisk. And this will get us to AI as well, and that is law works with lots of other disciplines to understand human behavior.

Law and economics would be the classic example, ‘Hey, you want to change tax policy? Let's talk with the economist to understand how a change in tax policy might change behavior.’ You don't need an FDA to regulate economists because they're not creating tools to directly modify mental function.

They're changing things outside in the environment that will change human behavior, but not things inside the body. Neuroscience offers not only new knowledge, but also new tools, new neurotechnology that can directly and indirectly change brain function and therefore behavior. One of the major challenges in brain science is that the brain is horribly complex.

One of the two caveats that I give about our motto, “every story is a brain story,” the first one is that every story is not just a brain story. So, we should still think about economics and sociology and religion, all these other things that should matter.

But the second is that every story is a not fully understood and sometimes poorly understood brain story. There's one brain scientist who says, if understanding the brain is like running a mile, we've come three inches. I mean, it's so, so complex. 86 billion neurons, hundreds of trillions of connections, the most complicated thing in the universe.

Now let's see, what if there were tools to take tons and tons of data and make sense of it?

That's how I got into AI, you know, well over a decade ago, 15 years ago; it was really through brain science. There's another connection as well, which is that there are lots of different types of AI, and we may talk about some of them, you know, factory robots, very important, fascinating. But the subset of AI that fascinates me the most and gets me most excited is the AI that is, in one way or another, trying to either augment or replace or modify our human information processing.

And a neuroscience view on behavior and on the law starts with this foundation: the way that we produce anything is through information processing, and that includes emotions. We've got sensory organs, we take in the world, we process that in the brain, in communication with the rest of the body, and then we do something or think something or feel something.

That's everything. That's the moment somebody says, I love you too. That's the moment somebody says, I hate you. That's the moment somebody says, great, I did great on this calculus exam. That's what our law students do. That's everything. And artificial intelligence is another way of processing information non-biologically. And it can be integrated with humans, it can be separate from humans.

But those are the origin stories, both how I came to neuroscience and law, and then how I came to AI a number of years ago.

Alan Rozenshtein: So when you were talking about AI, back when you were focusing on neurolaw, you know, back when you were writing about this in 2010, what did you mean by AI then?

And contrast that to the extent things have changed with what you mean by AI today, when you think of the applications of AI in particular to criminal justice.

Francis Shen: So back then, well, we have a thousand-page law and neuroscience casebook, and we first published it in 2014. We first drafted it in 2010, and that's when we had our first draft of the AI chapter.

And I would always tell the students when I taught it, this is probably the most important chapter in the book, even though we put it last, and we put it last 'cause it's so future-looking. Back then we were thinking a lot about the following: brain-machine interface, which by the way has taken off; there was no Neuralink then, and Elon Musk, I don't even know if he had thought of the idea yet, but there were lots of others.

So brain-machine interface; robotics, all kinds of robotics, so, you know, I was showing factory robots and things. And then cognitive enhancement; superintelligence; merging, which is kind of brain-machine interface but in other ways; as well as, you know, generalized AI, what is AI.

All those questions are still really pertinent. The number one type of AI that did not exist then and was not in our purview then, but is now: large language models. Now, versions of it were around; I taught the first law and AI class here in Minnesota, and even then, you know, we had folks coming in from Westlaw talking about the AI systems they were developing to improve search.

And the early law and AI work, as you know, I mean, goes back decades, and folks were always trying to figure out, can we have basically a robo-judge? Is there a way that a system will be able to delineate legal principles and apply them? And so that stuff's been in the ether for a while.

But I'd say those large language models are sort of the thing that really changed things. I mean, machine learning was there; it was already a factor. But the ease and the access of these new LLMs, I think, has really changed the field. I know you agree; you're right on it, you're already at the cutting edge also. But that was not something we were writing about back in 2014.

Alan Rozenshtein: And so today, when you talk about AI, right? When you go on the campaign trail, you're talking to students, or you're writing and you're thinking, we should use AI to improve criminal justice. You know, are you talking about ChatGPT, or are you still talking about, sort of, lower-level statistical machine learning?

I mean, obviously there's no bright line between all of these different tools. But I'm just curious because AI can mean so many different things. You know, what are the things that you're sort of focusing on the most? And we can sort of also get to that in our conversation.

Francis Shen: Yeah, I think about a whole bunch of things, because I have a definition of AI that's really broad. In fact, the first session of law and AI, when I teach it, is "What is AI?" And we read some essays that basically say you cannot define AI, so you just have to, sort of, fake it, describe it. We kind of have different categories, but we don't—

Alan Rozenshtein: My favorite definition has always been "AI is anything a machine can't do." And then once a machine can do it, we just stop calling it AI. Then it's just, you know, well, obviously a machine can do that.

Francis Shen: Which is fair.

And I also cheat by just going up on the board and saying, all right, is the spell check on your Microsoft suite of things AI or not? And then, you know, about half the students say yes, and others say no. And we go through the calculator and other things.

And the reason I start there is that, well, we'll come to, you know, hard AI maybe a little bit later. But let's set that aside for the moment.

I think everything else is variations on a theme. And that theme is, this is information processing that humans could do, but that, for either efficiency reasons or we just don't wanna do it reasons, we should have the machine either do it entirely or help us do it. And that would include to me calculators, you know, I don't need to know 379 times 2,022 'cause I have a calculator that can do it for me.

And can do it much better than I can, as well. So with that in mind, the sort of more advanced AI that's on everyone's mind would include, yes, use of large language models. But more than that, actually, I think the real goal here is to develop algorithms and predictive models, and, you know, those actually may or may not include AI, but perhaps AI-infused predictive models, that will help the system make very difficult predictions. And in particular, predictions about the fact that almost everyone who comes through the criminal justice system in the United States is going back out into the community.

Almost everyone; life sentences are super, super rare. They're the ones that get in the headlines, so we may think there are more, but they're pretty rare. Even longer sentences, although we have a lot of them, are relatively rare. Most folks who come through are going back out. So what intervention should happen while they are in the system?

You've got a whole range of interventions. Well, this is a mass-scale production system. Just in Hennepin County, for example, you know, over 10,000 cases a year, and larger cities and counties have even more.

And the way we do it right now is basically based on precedent, right? We have a system in place. We have a set of guidelines in place that are using, like, 1980s math. And we do that, and we never check the outcome, meaning, did this person succeed in the world? Were they better off or worse off after being in the criminal justice system?

And that to me is a really excellent place to think about using AI. Now to do that, we have to rethink the system because we don't have good data. The old adage applies, garbage in, garbage out, and I think even worse than that, bad data in, biased outcomes out, really problematic outcomes out.

And we can get to that too, like what data would you need, but that would be to me like the overall most important way to use AI, which is, ‘boy, this is a really large number of people coming with complex backgrounds, and how do we at scale, try and optimize what's best for each individual?’

And the answer has been, right now, we don't. Because it's too tough. So we don't individualize. We just kind of put people in buckets. I think the processing speed of AI, the ability to give it a lot of data, and then checking it, would allow us to really help these humans in the system.

Now, there are a lot of other ways, just in legal practice, that are not unique to, you know, criminal justice, but that are already happening. So both defense attorneys and prosecutors are utilizing tools to help write their briefs. That's already happening. I mean, judges that I know were, years ago, using WestAI to help check their opinions. That's already happening.

And I'm sure, you know, your research has sort of already covered some of the challenges there: hallucinated citations. You know, what is the balance between ensuring that lawyers can still do some of that work on their own versus what can be outsourced? I consider that actually a fairly non-controversial but important use of AI in all of law. I mean, if we're not doing it, it's legal malpractice, but if we're doing it wrong, it's also legal malpractice.

You need new ethics training. But that first thing I mentioned is really new.

Alan Rozenshtein: So that, that's great and I wanna dig into a lot of these, sort of in turn. But before I do that, I wanna ground this a little bit in the specific context that you're operating in, which is this election for the Hennepin County attorney.

And the reason I was particularly excited to talk to you was because, you know, often we talk about AI policy issues at a very high level of abstraction or at the national level, or even at sort of the state level. But at least in criminal justice, the vast, vast, vast majority of criminal justice happens not even at the state level, but at the local level.

And so, I think really grounding this in the context of a specific local county is really helpful. But for those of our listeners who do not have the fortune of living in the great state of Minnesota, maybe you can just say a few words to kind of contextualize what is the Hennepin County attorney and what are the criminal justice responsibilities of a position like that.

And then once we've established that, we can talk about how you think AI should plug in to the specific functionings of this part of the criminal justice system.

Francis Shen: Good question. So, broad strokes, we have federal criminal laws and state criminal laws. And if you're violating either of them, you could be prosecuted, but the vast majority of prosecutions come under state laws.

So, Hennepin County is one of the counties in Minnesota. It includes 45 different cities. The city you'd know most, if you're not from the area, would be Minneapolis, but it stretches out to include places like Saint Bonifacius, which is more rural. And the most expensive homes ever sold in this state are in this county as well, 'cause it has a number of wealthy suburbs.

So, it has a range of communities and people living here. And each city also enforces its own ordinances, and the way we divide the work is that city attorney offices also handle crimes that do not rise to the level of a felony. But if there is a felony-level crime in the county, then it comes through the county attorney's office.

And the county attorney is charged with prosecuting that crime. The county attorney also has a number of civil duties as well. So, for instance, they defend the county hospital if there are issues, and do labor law; there's a whole civil side. But on the criminal side, that's the role.

And it's a big county. Over a million residents, over 10,000 cases a year are coming through that county. And relevant to the conversation, I'll just add one more note, which is that I think those outside the criminal justice system, if you've just seen TV and the movies, you think, oh, every case has a trial and there's a judge and a jury.

No, 97% of cases have what's called a plea deal. They're not going to a jury. The prosecutor and defense attorney are getting together, negotiating an agreement about what will happen to the offender, and then a judge approves it. So that's kind of the machinery of what happens. That's the geography and that's the task of the county attorney.

Alan Rozenshtein: Okay, that's great.

So, now let's, and you've mentioned a little this already, but I kinda wanna go piece by piece. Maybe one way to think about this is the sort of life cycle of a criminal justice event. And we can sort of think about, let's get your thoughts on sort of what role you think AI can play in that.

So the first thing that happens is a crime. And then it has to be investigated. And so, you know, I know you've talked about how unacceptable it is that, at least for certain categories of crimes, the clearance rates are so low. And I'm curious what role you think AI can play in getting those clearance rates up.

Francis Shen: Yeah, so for those who don't know, a clearance rate is sort of a "did we get the bad guy" statistic. And nationally, those rates have really gone down. For homicide, they were about 90% in the '50s; they're about 60% now, nationally. In Hennepin County, it's 68% for homicide, with robbery and rape hovering around 20%. Auto theft, 1%. So if your car's stolen, good luck. You can get the car back, but you're not gonna find the individual.

Alan Rozenshtein: But just for homicide, just 'cause I do think it's useful to sort of flip that statistic around. So 68% clearance rate means that 32%, a full third of murders, are never solved. I mean, is that, I mean, that's a horrifying statistic, putting aside AI.

Francis Shen: Yeah, I mean, 80% of rapists aren't being brought to justice in this county, and that's not unique to this county, unfortunately. One of the reasons for that is the changing nature of some of these crimes. It's also the case that it's a scale issue. So it's not that those auto thefts, you know, at 1%, are master thieves and master criminals that could never be caught.

It's that we're not gonna exert very limited resources on an auto thief when you've got violent crimes and everything else down the line.

But let me give you some concrete examples of things that can be done at the investigation stage. Though we can go back actually even before that, because "prevention through prediction is better than conviction" is one of my mantras.

And I'll talk about that. But once something happens, the way we typically do it is, it has to be observed. Observation can come through different ways, depending on the crime. But one of the typical ways is through the eyes of a human, right, maybe aided by a radar gun if it's speeding, or by witnesses. And especially, increasingly, by video.

So there is already one of the cities in Hennepin County that uses an AI intelligent camera called Acusensus. And it's on one of these big highways where people are driving, like, 50 miles per hour. And what they're doing is they're taking out their cell phones, both hands are off the wheel, and they're looking down like this, okay, at 50 miles per hour.

If you've got a little group of just six officers, you can only get so many of those folks. But this system is able to take these pictures and identify those who are basically doing this in clear violation, and get that information immediately to an officer, who can pull them over within, you know, 20 seconds, I think the statistic is, and then determine if a violation has occurred.

I think that's really effective, and most of that is a citation, a warning. We're not talking about, you know, incarceration. And that wouldn't rise to the felony level, but it's an example of the type of, I think, really useful, important, and productive AI use. What's it doing? It's augmenting our information processing.

It's seeing things that we can't see. It's processing it faster than we can process it, and it's doing it at scale. It's keeping a human in the loop. And it's not, I think, you know, draconian, which is one of the concerns.

Another example, actually, I'll use the example of what happens if the car is stolen. So somebody parks eight o'clock at night, they park the car in the street, they go to sleep, they wake up, the car's gone.

Well, right now, to investigate that, we've got very limited investigative resources. You're taking a human who has to sit through 13 hours of video, right, from the security cameras or the Ring, to find out, you know, when it happened and see if we can identify who it is. It is a perfect place to utilize AI. And there are others as well.

So these are places, I think, just at the investigation stage, at the early stage, where we could really make good use of AI. Now, I'll say right here, because it comes up here and throughout: yes, there are all sorts of ethical concerns.

You know, when I talk about it just out in the community or in class, it's, like, the first hand that goes up: "Isn't this Big Brother?" And the answer is—

Alan Rozenshtein: That was indeed gonna be my question.

Francis Shen: It depends what you mean by "Big Brother," and the answer is no, this is not George Orwell's 1984.

It could be, it certainly could be. But I think of it more as just trying to be in touch with our neighbors. We don't think it's ‘Big Brother’ when someone, if they were in the seat next to you, looked over and said, ‘Hey, put your hands back on the wheel,’ right? That's not Big Brother. That is a helping hand that's paying attention to someone.

And in fact, for many who can afford it, there is an increasing interest in sharing information for better health outcomes. These are smart rings and smart watches and all kinds of devices that are communicating, gathering data; AI is being used to analyze that data, and then it's being sent back with the idea that, wow, better information, better observation and understanding of my data, can help me live a healthier life.

And so I think this data can be used for the good but it could certainly be used for the bad. And that's why you've gotta have, you know, the right ethical guidelines in place.

And that's a lot of the work of our lab and, you know, many others doing this now across many disciplines, whether it's AI and health, AI in the law, AI in business: to think about how you take advantage of the fact that this little unit that I mentioned out in South Lake Minnetonka was able to detect 10,000 violations in, you know, one month alone.

And there's no way they could have done that without this technology. And I think that's a good thing. I think it's reduced distracted driving, increased lives saved, and decreased harm done.

Alan Rozenshtein: So that's the prevention, and obviously we could talk a lot about all of these issues, but, you know, time is limited, so I wanna keep marching on. That's the prevention slash investigation side.

Then you have what we call the adjudication side, so the actual, you know, bail to jail, right, as we say in law schools. And I'm curious what role AI can or should play for the line prosecutor, right, especially at the local level, who is often given, you know, a stack of file folders that's bigger than she is to get through.

I'm curious if you think there's a role for AI to play, and in particular just to kind of front load the, I'm not sure it's an objection, but a consideration, what role there is for the human in the loop, because you already mentioned this and so maybe it's an opportunity for me to ask the question.

You know, I feel like in a lot of AI conversations generally, you often have this kind of incantation of, "but of course we need a human in the loop," and sometimes people really mean that. Sometimes people are just saying it because it makes people feel better. But there's the question of when you should, in fact, have a human in the loop.

When is the human doing good versus causing problems? Or when is the human there to actually intervene in the loop, versus the human essentially rubber-stamping because we need someone to blame, a carbon-based life form to blame, not a silicon-based life form?

And I think, you know, we could talk about that in many different contexts, but I think the line prosecutor negotiating the plea deal, or occasionally going to trial, is a good opportunity.

So just riff on that set of questions, if you will.

Francis Shen: Yeah, a great set of questions. At a high level, across all areas of use of AI, I don't think we always want a human in the loop. I think it's an empirical question. But it's also a procedural question as well; it's a combination of the two.

To me, the bottom line is what outcomes do you care about most? And does utilizing the AI improve your outcomes? The outcome I happen to care about most is community safety. And that includes feeling safe and that includes being very responsive, actually, to victim and community views.

So here's how I think about, you know, the basic decision, and you're absolutely right to describe what is a mass-production process. Prosecutors have very limited—judges and prosecutors and defense attorneys all have pretty limited information.

When this process starts, there's a police report, which might not be that extensive. If there are prior charges, you might have something there; you know, you would know if there was interaction with the system. Beyond that, not a ton. So you've got all the complexities of this individual, and yet you've boiled it down into, you know, a few paragraphs, a few pages, a little bit of information.

Yeah, that first decision, you know, the bail decision, is an important one, but so are the charging decisions that sort of set things out. And I see there the ability, eventually, for a more efficient information ecosystem. There's just a lot more information for the prosecutor.

And here's, to me, the reason that it could work. When you're seeing 10,000-plus cases a year, and even more are referred, very few of those are, "oh, I've never seen anything like this before." And in fact, most of them are questions of, how do we handle these cases? What's our policy on these, right?

Because they're repeats. Not necessarily the individuals, though sometimes with recidivism it is the actual individuals, but the repeat type of case, right? And so what makes the system work now is everyone has a roughly agreed-upon idea of how we treat cases like these.

Oh, first-time DUI, this is what we do. Everybody knows the standard. Everybody knows what this office does. The judge approves it. We're done. Right? It's an efficiency.

And then, you know, we've got different tweaks. Oh, this was a little, we're gonna go a little lighter here, a little harder there.

It takes the individual out of the system. And the sentencing guidelines are important to mention, because they really are the backdrop against which all of this happens, at least in Minnesota, which is a guidelines state, as many states are. So, for those who don't know, sentencing guidelines were instituted to take the individual out because of concerns about disparities, racial disparities in particular, in sentencing.

Prior to guidelines, there's discretionary sentencing. Judges are sentencing every which way. And it turns out that in the aggregate there were racial disparities. How do you handle that?

Well, let's take that discretion away in sentencing, and instead we're gonna ask two questions, and there's a grid. One axis of that grid is: how bad is the crime? Like, how much harm? Murder at the top, different types of homicide, and it goes down from there. And the other is: how many prior offenses, how many bad things have you done before? Add those two together, and here's the box.

And you can depart from it, but that's basically the box. It's simple, it's information, but I think it's not nearly rich enough, and it's not individualized.

So what I see is a world in which you would be gathering individualized information again and again and again. And then we begin to ask about the analogous case, not just, oh yes, this is a person who has two prior convictions, and here's the thing, because that's what we do now. Instead, we'd be able to say, 'boy, the system is suggesting this is a person a lot like this other person, and what worked for that other person was this. What didn't work was this. Let's go with what did work.'

That's the equation. I've called it precision sentencing before, and it's kind of like precision medicine, which is where medicine is heading as well. My brother's an oncologist, and his whole work as a physician-scientist is, rather than just saying, all right, we're gonna treat everyone the same and sling chemo at you, we're gonna try and figure out what your particular biology is and come up with a treatment regimen that's more likely to work. That's the basic idea.

There's no way you can do that without big data and a sort of, I'd say, AI-infused system to try and help, though how much AI exactly, I think, is unclear. Again, decisions still need to be made in these cases by humans, because there are other factors that matter that the system may not pick up.

But that's a very, very different system than the one we have right now.

Alan Rozenshtein: So that actually nicely leads to what we might think of as the end state of the criminal justice system, which is once people are, let's say, convicted and incarcerated, they're now in the system.

And then a whole set of issues comes up, especially at the state level. There's bail on the front end, but then really parole, early release, that sort of thing on the back end. And I think this is where AI systems, or algorithmic systems, have had their most work already and have been, I think, quite controversial in a lot of interesting ways.

Probably the standard example people cite is the COMPAS system, I think it was a pretrial release system from several years ago, and concerns about whether that system was accurate, or whether it was racially biased, or whether it was both accurate and racially biased.

And so I wanna actually ask you a version of that question, especially given that you said that your priority is around community safety.

One of the concerns around AI systems, I mean, obviously there are some AI systems that are just bad systems. You have crap data, they're poorly designed. Fair enough. That's really bad. But in principle, those are fixable problems.

But it strikes me that the deeper concern about AI systems, and I don't know how one fixes this, is that some AI systems work very well, in the sense that they accurately predict the thing you are asking them to predict, but they are doing so based against a background social reality that we might object to on sort of other normative terms.

Right? So, I'm being very abstract here, but the general idea might be: if you have a society that's, let's say, unjust and unequal in some respect, and that causes some particular group to have worse outcomes, well, that group's gonna have worse outcomes. It may in fact even engage in worse behavior, which is downstream of those worse outcomes.

So then, when you ask the AI system, 'Hey, I need you to predict whether this defendant is gonna do this bad thing, whether I should release them, whether I should give them parole,' the AI system might give you an accurate answer, right? It actually might be accurate. And it also might be, on a particular view of the idea of bias, biased.

And so I'm curious how you think about that as a conceptual question, you know, should we even think of that as bias? Is that really the right word for it? And then, maybe more importantly, as a normative question, which you might think of as sort of a safety-versus-fairness thing.

Because that, to me, is where the rubber hits the road. You know, what if AI actually works? That's its own set of concerns, or one might be concerned about that.

Francis Shen: Yeah, it's a great set of questions. So I think it depends what you ask the AI to do. COMPAS, which didn't use AI but is one of these, is an algorithm, an equation, right?

And, like other risk assessment tools, of which there are many, and there's been a lot written by our, you know, law professor colleagues on the various risk assessment tools, it typically focuses on recidivism: how likely is it that this person is gonna do another bad thing?

There's no reason that you couldn't instead have a system that was asking: how likely is it that this person is going to thrive? Which is related to safety, I actually think, very much so.

And you could also have a system where instead you ask: I've got three treatment options, three real rehabilitative options. What's most likely to work? What's the best pairing here? So I think it takes more creative imagination about how these tools are used.

And the majority of the risk assessment tools are, I think, focused on, again, the likelihood of committing a bad thing again. And that's an important thing to consider. So that's one thing.

Secondly, I do think it is possible, and we already have this actually, whether it's AI or just, you know, an algorithm, that fairness may not mean that everyone gets the same outcome.

And that's a normative, you know, position that I take. It's the reason I don't like the guidelines. A benefit of guidelines is that, hey, no matter who you are, you get the same outcome.

I really like the idea of individualizing, because I think it's better if you have enough information about everyone. I'll give you a concrete example, which doesn't require AI but does involve screening and a little bit of neuroscience, and that's first-time DUI.

For a first-time DUI, in most jurisdictions, there's just a standard thing that happens, and it's usually pretty lenient. Lose your license for a bit, you know, no incarceration, some fines, and you've gotta go to some classes that tell you don't drive drunk.

But we don't ask: why were you driving drunk last week? Is it because you just made a dumb decision? Okay. Or is it because you are an alcoholic, or you have some other addiction, and this is just the first time you've been caught?

Those are two really different people, and we should treat them very differently. And the kicker is, we have tools that can roughly separate those two kinds of folks out, but there's no incentive to do it.

So the reason I mention that is that any AI system, any algorithm in the system, is gonna be reliant on the data it has available. And I think the reason that COMPAS and those other recidivism tools are used is because that's the one point of data we have.

The one thing the criminal justice system knows is if you come back in, right? That is what fuels those data: if you're arrested again. We don't track your wellbeing in the community. We don't know, 'Hey, you spent two years incarcerated, you're back out, did you then finish the schooling? Did you do this? Did you do that?' We don't know.

And because we don't have that type of data, we haven't built these other types of models. And so I think it's in part a lack of imagination, a lack of will.

But that's a real issue. And one that, you know, if I'm building the architecture of an AI-informed justice system, the key to unlock it has gotta be measurement of real-world outcomes that aren't just, 'Did you come back into the system?' That's one of them, but it's probably not even the most important one.

Alan Rozenshtein: So let's assume, for the rest of the conversation, that AI has a really important role to play in the criminal justice system across these different areas. So the next question becomes: okay, well, how do you implement that at the local level? And I'm actually very curious about this kind of institutional, bureaucratic question.

I think that, frankly, the government's, you know, incompetence at technology is quite overstated. I think the government often does a great job with technology, and there are a lot of really smart and committed technologists, not just at the federal level but at the state and local levels.

At the same time, to do this at the level of scale and sophistication that you seem interested in is a big lift, right? It requires a lot of systems. It requires a ton of data. It requires a lot of buy-in from, you know, a lot of cops and lawyers who may not be that interested in technology. You and I are nerds, but not everyone shares our particular affliction and interest.

And so I'm curious how you would imagine going about doing this, and what are the challenges that you might foresee? And let me just give you maybe a concrete concern to start with, and then you should sort of riff more generally.

How much of these systems can be developed in-house by the government, you know, especially by local government versus how much of this is gonna be proprietary and from the private sector?

Not that there's anything particularly wrong with proprietary software, right? I mean, no one expects local governments to come up with their own Microsoft Word alternative. But especially when you're dealing with somewhat inscrutable AI systems and algorithms, one might be especially concerned, right, if they're being developed and sold by profit-seeking companies.

There are a bunch of other issues we could think about, but I'm just sort of curious to start with that one, and more generally to get your take on: how do you actually do this, you know, if you get into office on this platform?

Francis Shen: You've gotta build trust.

That is the number one thing, because the cultural resistance is already growing. It's been one of the interesting things, talking in community.

There are a lot of misconceptions, a lot of misunderstandings, and also, you've raised some good questions and others do as well, some really well-grounded fears about how different types of AI could be misused.

So there has to be trust, including trust with the community. A lot of the work that we've been doing around AI in the medical space has been about involving those who are most affected, in that case it'd be patients, in this case it would be justice-involved folks, in the conversation.

Alan Rozenshtein: Well, let me pause for a second.

'Cause I wanna get to that question, but I wanna ask a prior one first, which is: before you go and try to convince, let's say, the voters, right, which is gonna be my next question, how do you build this capacity inside the government apparatus itself? So that's the thing I wanna sort of start with.

'Cause that seems like a thing people often gloss over. Not saying you're glossing over it, but in these discussions, I don't hear enough conversation about that institutional capacity building.

Francis Shen: Yeah. So let me rule out the Silicon Valley model of "move fast and break things" and doing it all at once. Show up Monday morning and here's how we're gonna change everything.

I understand how that might work in some sectors. It will not work, and I don't think it should work, in government. Certainly not in criminal justice. The second thing is there has to be transparency. So you asked a specific question about whether it would be developed in-house or with outside partners.

It's gotta be with outside partners, but only partners who are willing to be fully transparent about their work. One of the problems with COMPAS is that it was proprietary and it was never entirely clear, and that actually had to be litigated, what exactly it was doing. So it's gotta be really transparent.

I do think it happens with pilots. So if you're not gonna move fast and break things and change everything all at once, what are you gonna do? You're gonna pick one, maybe two, particular pilot programs as proof of concept.

Eventually, when we get to trust, it's gonna build trust. And it's also gonna work out the very practical challenges of doing this, because it's not something that's been done before.

Now, I will say that it does build on all sorts of other innovations in the justice system. I mean, you know, there's e-filing, there's this or that, people can come along. This, I think, is different, because it's not just another tool to sort of help you, human, do your job. There are potentially some places here in which it will say: your job is actually shifting a little bit.

And we've talked about that in the pedagogy as well. For the role of lawyers, I think the skills are still needed, but those skills are gonna need to be adapted. So the practical answer is you start small: you pick one or two pilot projects that are really narrowly tailored, and where you think you have some decent data.

And so, for example, there are some subsets of the justice system, some of our diversion courts and others, which have a much higher touch already with justice-involved individuals. Those might be places where you'd look and say, oh, you know what? We're kind of already collecting this data. This is a really good place to kind of start and see if we can improve.

One of the challenges, and it's a real one, is the lack of data-sharing platforms across agencies and units, where you would want to be able to do this quickly and with as few transaction costs as possible. You can't change that.

You also can't change resource constraints, so this all has to happen within the budget, basically. Like, you don't get to add an extra 10 million for the AI budget. Of course not. And you've gotta prove some efficiencies, that it's gonna be more efficient in the longer run, even in the short run.

So that's the practical answer. And I think what will happen is, slowly but surely, through those pilot projects, you're building trust and you're proving that it can actually work. And that's been the case with law and neuroscience as well: you don't suddenly show up everywhere, but you take a couple of places, and addiction's a pretty good example.

It's like, okay, hey, we have new treatments for addiction that we just didn't have two decades ago. And a lot of counties, including ours, are doing this now. But at first it was met with, like, what are you talking about? Medication-assisted treatment for addiction? Why would you give someone with a drug addiction more drugs?

And I was like, here's why. And, more importantly, here are the outcomes. I think the same thing has to happen here.

And I'll give an example, just like that example I gave about, you know, highway use of these sensors. That's, to me, a really good pilot example, right? It's contained. We're not talking about implementing it everywhere. It's like, okay, so what's the next step? Expand it. That sort of thing.

I think that would build trust, both in the community and crucially in the office.

Alan Rozenshtein: So I wanna end by letting you respond to the question about what it's like to try to convince the community of this.

Because again, you know, we're not just having this conversation in the abstract. You're running for office, you're out in the community, you're talking to people. It's a fairly crowded field. My sense is that the AI lane is somewhat lonely in this field, and you've really staked it out. The other candidates are somewhat more traditional in how they talk about these issues.

And so I'm just curious: as you've gone to the community, are people interested in this? Are people skeptical of this? You know, one of my frequent frustrations as a sort of soft AI optimist is that people either don't know about this technology or, if they know about it, they're terrified of it.

And look, that's fair. People can think what they think, right? It's not their job to love this stuff. But, you know, as someone who I think shares that at least soft AI optimism, I'm curious what your experience has been, again, not just as an academic but as a politician trying to convince the democratic process that there's a role for this.

How's it been going?

Francis Shen: Thanks for the question. It's been going well. But I will say, first of all, you're right: I'm the only person talking about AI and criminal justice, not just in this race but, I think, in most races. As much as it's being talked about in some other sectors, in business and in many other places, it's just not a thing that's being talked about.

And so that means that the first thing I do a lot of is just listening to concerns because people have heard of AI. Most of what they've heard is not great.

And in particular, I think there are concerns around the labor market: 'Hey, is this AI gonna take all the jobs?' There are concerns around the environment: 'Hey, is this AI gonna take all the energy?' And there are concerns about bias: 'Hey, is this AI gonna be not just Big Brother, but Big Brother with a racial bias?'

And that's a three-strikes-and-you're-out count against the technology. But then I begin to give examples, and I talk about the ways in which some of these tools may really speak to the things that people do care about, which are, you know, better mental health in the community.

And there are a lot, we won't get to it today, but a lot of AI tools that are aimed at improving mental health, AI that can improve the efficiency of a system that is so resource-constrained that it can't do all the things that it wants. So I'd describe it as an exercise in both listening and communication, and also getting beyond the headline version, or the social media post version, of some sort of scary AI robot doing all this stuff.

I also would say this, and I say this all over the place, to not be deeply engaged in AI and the law, for lawyers, for this position, or really any kind of cohort of lawyers, at this point is legal malpractice.

We are not gonna look up in five years, in two years, in 20 years and say, ‘oh yeah, I'm glad that AI wave passed. We're done with that.’ Absolutely not.

What we are potentially gonna do is look back, in the same way we're doing with social media right now, and say, 'I wish we would have done X, Y, and Z with social media. I wish we would have understood both its amazing potential but also its potential harms, and addressed them.'

So, you know, I was talking just last night, or two nights ago, with a group, and said: in the county attorney's race, this is the first time ever, really the first cohort of elected officials across the board who have to have some knowledge of, and experience with, AI.

Because it's not a question of "if," it's just a question of "when" and "how," and if we don't get it right, it'll be really hard to put that genie back in the bottle and correct it.

So that line of thinking, I think, really strikes people. And that's kind of how I'm framing it, because I think it's the right framing. I'm not going in, like I said, with break things and just do it all tomorrow.

It's happening.

Are we gonna make it happen the right way? Are we gonna make it work for people in the community and for everyone, or not? And I think that that's one of the questions in this and a lot of other races this year.

Alan Rozenshtein: I think that's a good place to leave it. Francis Shen, thanks so much for coming on the show.

Francis Shen: Thanks, Alan. This was great.

Kevin Frazier: Scaling Laws is a joint production of Lawfare and the University of Texas School of Law. You can get an ad-free version of this and other Lawfare podcasts by becoming a material subscriber at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Check out our written work lawfaremedia.org. You can also follow us on X and Bluesky. This podcast was edited by Noam Ozband of Goat Rodeo. Our music is from Alibi.

As always, thanks for listening.


Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland. He also speaks and consults on technology policy matters.
Francis Shen is a professor at the University of Minnesota and expert at the intersection of law and neuroscience, as well as law and artificial intelligence.
