Cybersecurity & Tech

Lawfare Daily: Why AI Won’t Revolutionize Law (At Least Not Yet), with Arvind Narayanan and Justin Curl

Alan Z. Rozenshtein, Justin Curl, Arvind Narayanan, Jen Patja
Thursday, February 12, 2026, 7:00 AM
What are the bottlenecks preventing AI from reducing legal costs?

Alan Rozenshtein, research director at Lawfare, speaks with Justin Curl, a third-year J.D. candidate at Harvard Law School, and Arvind Narayanan, professor of computer science at Princeton University and director of the Center for Information Technology Policy, about their new Lawfare research report, “AI Won't Automatically Make Legal Services Cheaper,” co-authored with Princeton Ph.D. candidate Sayash Kapoor.

The report argues that despite AI's impressive capabilities, structural features of the legal profession will prevent the technology from delivering dramatic cost savings anytime soon. The conversation covered the "AI as normal technology" framework and why technological diffusion takes longer than capability gains suggest; why legal services are expensive due to their nature as credence goods, adversarial dynamics, and professional regulations; three bottlenecks preventing AI from reducing legal costs, including unauthorized practice of law rules, arms-race dynamics in litigation, and the need for human oversight; proposed reforms such as regulatory sandboxes and regulatory markets; and the normative case for keeping human decision-makers in the judicial system.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Intro]

Justin Curl: The amount of work that each side does could essentially just go up, because now both sides are being hyper-productive with AI. Instead of writing like one motion or writing five pages or looking at a hundred cases, they're now doing a hundred X that in all of those relevant domains. So the amount of outputs has increased, but because the outcome that clients ultimately care about is settling favorably or winning at trial, it takes much more work and much more outputs to reach that exact same outcome.

Alan Rozenshtein: It's the Lawfare Podcast. I'm Alan Rozenshtein, associate professor of law at the University of Minnesota and research director at Lawfare. I'm talking to Justin Curl, a third-year J.D. candidate at Harvard Law School and Arvind Narayanan, professor of computer science at Princeton University and director of the Center for Information Technology Policy.

Arvind Narayanan: Judges make law a lot of the time, exercising human judgment about what we want the world to look like. That's the perfect example of what I would want humans to be doing in a world where all conceivable labor can be automated.

Alan Rozenshtein: Today we're discussing their new Lawfare research report, co-authored with Princeton Ph.D. candidate Sayash Kapoor, arguing that despite AI's impressive capabilities, structural features of the legal profession, from guild regulations to adversarial dynamics, mean that the technology may not deliver the dramatic cost savings that many predict.

[Main Episode]

So I'm excited to get into the paper that you and your co-author Sayash Kapoor have written about the effect of AI on the legal profession, and specifically why it might not provide the sort of cost savings that everyone is predicting, and that at least some people, though perhaps not lawyers, are hoping for.

But before we get into that, I wanna take a moment to talk about the broader framework of how you're all thinking about this, which draws on this broader project that you, especially you, Arvind, and your collaborator Sayash have thought about, have written a great book about, and have done a lot of great writing about, which is this idea that AI is a normal technology.

So just before we get into the law part of this, just sketch out what you mean and particularly what you mean by a normal technology.

Arvind Narayanan: Definitely, let me start with a historical example. Way back when electricity became a thing, there was a lot of hope that it would enable factory owners to rapidly achieve a lot of cost savings by, as I understand it, replacing those big, messy steam boilers with electricity.

But there's a great analysis of this by economic historian Paul A. David, and it turns out when they first started doing that, it didn't really seem to help, and it took 40 years, something like that, to really figure out how to gain the benefits of electricity. And that's by taking advantage of the fact that electricity is a much more portable kind of technology and you can move it and generate it wherever you want.

So that required restructuring the whole layout of factories, going towards the logic of the assembly line, changing how firms hired and paid and trained workers, that sort of thing. And these kinds of downstream innovations, again, took a period of decades.

And really the insight of AI as normal technology is that we can and should learn from these past general purpose technologies. And AI is not exceptional in that way. And a lot of the discourse starts from rapid improvements in AI capabilities to drawing a straight line to societal effects or effects on any particular profession or the economy.

And our view is, no, that's not gonna happen. There are various stages in this pipeline.

So we take this existing framework from the theory of diffusion of innovations and apply it to AI specifically, and we have four stages. The first stage is improvements in capabilities. The second one is how those get translated into better products in law or any other particular domain. The third stage is workers starting to adopt these products and learning to use them. And the fourth stage is really the hardest one. That's where you need to make organizational changes to laws, norms, business models, et cetera, which we get into in this paper.

But really the overall framework is about looking at all of those four stages in order to understand the speed and nature of AI impacts on any particular profession, as opposed to merely looking at, you know, what is the latest performance of GPT 5.2 or whatever.

Alan Rozenshtein: That's great. But lemme just dig in a little bit before I turn to Justin to sort of set up the law part of this. To be clear, and I just wanna make sure that I'm understanding the argument correctly: when you say normal, you don't mean non-transformative, because of course electricity was quite a big deal. Automobiles were a big deal. The Neolithic revolution was a big deal. I mean, a lot of these things are big deals.

It sounds like what you're saying, though, is that it just takes a lot longer than people think. There's a quote, I think it's often associated with Bill Gates, and maybe it's one of these quotes that's associated with many people, which is that people systematically overestimate what can be done in the short term, but then, because people are bad at compound math, they tend to underestimate what's gonna happen in the long term.

Is it fair to say that you are at least open to the possibility that in the longish term, AI will have a deeply transformative effect, even if it takes quite a long time, and there are roadblocks and reforms and a lot of messiness that, as you pointed out, the latest, you know, frontier math accomplishment that GPT 5.2 Pro or whatever is going around X does not really capture?

Arvind Narayanan: That's exactly right. We are pretty optimistic about the effects of AI in the long run. We do have much longer timelines than a lot of people talking about this, but also we emphasize the agency that's involved. I think these effects are not preordained. I think we need to get a lot of things right in terms of reforming our institutions in order to be able to take advantage of these benefits.

Alan Rozenshtein: What interested you in particular? Because, you know, Justin's a law student, I'm a law professor; we are professionally invested in the legal field. You are a computer scientist, so there are any number of fields that you could have, and I'm sure have in your research, applied this to. I'm curious if you think that the legal field in particular is a good test case or if it has some unique elements relative to other possibilities for you in particular.

What was interesting about the legal angle here?

Arvind Narayanan: Yeah, lots of things. So let's contrast a few different fields. On the one extreme, I would say, is software engineering, which like law is a purely cognitive field, and on the other extreme is, let's say, medicine, although it's not really an extreme. I wanna look at the range between these three fields.

So in software engineering, being a purely cognitive field, we're starting to see the impacts of AI very quickly. But an important way in which it's different from law is that it's not professionalized. There aren't a whole bunch of regulations about who can do software engineering, how software engineering can be done, various kinds of liability, none of that stuff. And so the impacts are starting to hit very quickly.

On the other hand, medicine, you know, like law, is very professionalized, but also you can't as an individual just start, you know, using a model to self-diagnose. You can, but you very quickly run into roadblocks.

Alan Rozenshtein: Yeah, I think billions of people might disagree with you there, Arvind.

Arvind Narayanan: Yes, for sure. Yeah, they are. But the problem is you can't prescribe yourself something, right? And so you hit a roadblock in terms of interacting with the system. Law is very nicely in the middle. It is purely cognitive, and so there is clearly a lot of great potential.

But at the same time, it is very professionalized. There's all these regulations that we discuss in the paper, and the reason it's so interesting to look at this profession that's in the middle of these two extremes is that you can really imagine and spell out: What are the kinds of reforms that will be needed in order to really take advantage of this tremendous possibility? And those are things that, you know, the professionals can start doing now. It's much harder to articulate that in medicine.

So an example is people say, oh, we're gonna be able to dramatically speed up drug development. Well, the problem is, the hard part of drug development is not discovering new molecules. It's, you know, testing them. The human trials, which take 10 or 15 years, highly regulated, et cetera. It's really hard to articulate how you can compress that down to, let's say a few months without, you know, throwing away what we've learned about testing drugs safely. But when it comes to law, it's not that kind of thing.

You can actually imagine and spell out the reforms, and I think that's what Justin has tried to take the lead in doing in this paper.

Justin Curl: I'll just jump in briefly, I think one way that I also think about the AI as normal technology framework is just as a prescription of where we should be focusing our efforts.

I think it's really easy to sort of view AI as this like all-encompassing tsunami where there's nothing any individual can do to fight back against it or can intervene or shape its development. But AI as normal technology tries to identify systematically what those organizational and societal bottlenecks are so that you can know where you should focus your efforts if you're trying to ensure that, sort of AI diffusion is positively impacting people.

Alan Rozenshtein: So I think one way of looking at your paper is an exploration fundamentally, you know, less about AI and more about why law is so expensive, right? Why the practice of law, why the products of law are so much more expensive and why, you know, we don't see the productivity gains in law that we do with, I don't know, flat screen TVs or many other things.

And so you identify sort of three structural reasons that legal services are expensive, even before AI enters the picture. So you talk about law being a credence good. You talk about how the value of a legal service is often relative rather than absolute. And then of course there are these professional regulations.

So can you just sketch out for the non-lawyers in the audience: What is it about law that makes it so expensive, sort of ex ante, before we can start talking about any technological developments like AI?

Justin Curl: Yeah, of course. And I think here it's really important to sort of nod towards Gillian Hadfield's work. A lot of this comes directly from papers she's written over the past two decades about this. They're excellent. I recommend people check them out.

Starting with the first sort of reason about credence goods. I think one reason law is unique is because it's very hard to evaluate the quality of legal services, even for lawyers and experts in the field.

You could imagine, if I'm engaged in like this complex year-long trial, it's hard to know whether a particular motion filed, or like a particular decision in one sentence about how to frame a topic, is the reason why the client reached the outcome they wanted. And so instead of being able to directly assess the quality of legal services, you're sort of forced to rely on reputation, or things like how prestigious the law school someone went to was. And so that makes it very hard to have a functioning market when you're thinking about legal services.

The second reason is that the value of legal services is relative. It's also hard to have like an understanding of legal services in the abstract. It's not like I can look at a service and be like, that's a seven.

If I'm engaged in litigation, oftentimes whether I'm able to achieve the result I care about depends on what the other side is doing. So whether my contract term is really good might depend on what the other side is doing and how they're thinking about it.

And then the final sort of reason that, again, Professor Hadfield identifies is there's a very complicated regulatory framework. There's two types of regulations, I think, that are relevant here. The first is UPL, or unauthorized practice of law, regulations, which limit who is allowed to provide legal advice, and that is defined incredibly broadly. So anytime you apply legal knowledge to specific facts, you might be engaged in the practice of law. If you do so without permission, or you're not a licensed attorney, that's actually a felony in a lot of jurisdictions.

So that's already one reason why, maybe some of these companies, when their chatbots are providing things that look like legal services, they might actually start to run into some liability for it.

Alan Rozenshtein: And I'll also jump in and just add to that. So I am a lawyer, believe it or not, despite being a law professor, whom I would not recommend hiring for legal advice. I am technically a lawyer. I'm barred in the fine state of New York. And when I moved here to Minnesota, at some point I was just curious: okay, well, what does legal practice look like?

Because I don't intend to really practice law anymore, but I don't wanna accidentally, you know, practice law without doing the right thing. And so I looked at the New York bar rules, and what's amazing is there were all these paragraphs about what the practice of law might be, but then they specifically refuse to say what the practice of law actually is, and they actually also won't tell you.

So it is an almost Kafkaesque situation where there is such a thing as the unauthorized practice of law, and it's quite a big deal to do it, but no one will actually tell you what constitutes the unauthorized practice of law, which I have to assume just has sort of a chilling effect on this whole industry.

Justin Curl: I think there's two really good examples actually connected to that, that we talk about in the paper. I think one is, if you look at what the New York Bar Association has said, they've been like, well, chatbots, they might be the practice of law. It seems like it's getting close.

And so I don't really know what I'm supposed to do with that if I'm a lawyer, or even a consumer, thinking about using AI.

Alan Rozenshtein: What units is "close" measured in? I'd be very curious how many GPT units is "close" in this context.

Justin Curl: Yeah, that's also what I wanna know.

Arvind Narayanan: Can I ask, do either of you know if there have been any lawsuits against the major chatbots?

I mean, I use it all the time for little things like reviewing contracts, and I assume that many people out there are doing that. So presumably these chatbots, you know, at least in my case, are providing legal advice that's tailored to my situation. So I wonder if people have been trying to sue them.

Alan Rozenshtein: I don't personally know of any of these. I mean, obviously, automated legal services are a thing that predates these chatbots, and I think there have been some legal, at least regulatory, challenges to some of these. You know, I dunno if it's illegal, but LegalZoom has faced these challenges, and sites like that. There are also the countervailing First Amendment considerations, where the guild could not get too aggressive about this, because people also have sort of First Amendment rights to, you know, talk to a chatbot about their legal issues as well.

But I'm curious, Justin if you've heard of anything.

Justin Curl: Yeah, I haven't seen anything focused on AI chatbots specifically, but I think LegalZoom is a good example, where over the past two decades they've been sued countless times, and they've had to rework the actual way that they provide legal services because they've had to reach very expensive settlement agreements.

And what's interesting about that is who the plaintiff is. Like you, you ultimately need a plaintiff to bring the lawsuit. And so sometimes that's the attorney general of a state, but sometimes that's actual individuals who have received the legal services. So maybe if ChatGPT gives bad legal advice and someone's upset, we might see a new lawsuit about it.

Alan Rozenshtein: So I wanna get into the bottlenecks that you all go through in your paper, but before I do, I wanna address sort of one thing that's not in the paper that actually might be surprising if you're reading a paper about skepticism that AI will automatically lower the cost of legal services. And that is that none of you are arguing that AI won't be able to do the actual individual cognitive tasks.

There are a lot of people that think that AI is, quote unquote, fancy autocomplete or stochastic parrots or just a giant plagiarism machine. There are lots of ways of dismissing it, saying that it'll never have the skill or the creativity to be a really good lawyer. Am I right? And let me ask you, Arvind, since you also have sort of a broader sense of the AI landscape across different cognitive domains: that's not what you're arguing.

It seems like your paper is happy to concede the possibility, and maybe you all actually believe this. I certainly do, but I'm not a computer scientist. That already today, and certainly within several years, on any discrete, even large-scale task, like write me a Supreme Court brief, you know, GPT-7 may actually be quite capable of outperforming all but the absolute elite lawyers, and even for those elite lawyers, at a tiny fraction of the cost that it takes to, you know, hire Paul Clement to argue your case.

And so all these bottlenecks are actually totally separate from the raw capabilities. Is that a fair articulation?

Arvind Narayanan: That's mostly where we land. That's right. We're not capabilities skeptics. So let's divide it into two ways of looking at it.

One is some of the current limitations such as hallucinations, or not really having access to all of the documents that it would need in order to do a good job in your case, that sort of thing. These are all easily fixable in our view, you know, especially when you consider the long term of AI development. They're gonna get fixed.

But then you do get into some gray areas. So what it means to write a good Supreme Court brief is not something, unlike, let's say, coding, where there are correct and incorrect answers. People are going to disagree about that. These are matters of judgment. So we do think there are some limits there in terms of how good AI can get, because, you know, it learns from feedback, and it's not going to be that easy to learn from millions of cases of feedback where AI creates an argument and then that brief is submitted and then you get to learn from what the result was.

In that case, that feedback loop is extremely slow, and so you're not necessarily gonna see the kind of rapid capability progress that you see in, let's say, math or software engineering.

Nonetheless, that's not where our skepticism comes from. We are acknowledging that there might be a day when AI is able to do any precisely specifiable cognitive task that most lawyers are able to do.

Alan Rozenshtein: Alright, so let's now jump into the first bottleneck, which you were talking about just recently, Justin: this question of regulatory barriers.

So explain sort of how that could get in the way of AI really revolutionizing and lowering the cost of legal services. And let me offer maybe a bit of a counterargument. Even if, you know, you go to law school in 2029, 2030, and the legal landscape superficially looks very similar, there are a bunch of law firms, some are very big, some are medium, some are small.

If all of these law firms are using AI integrally, right? If these law firms are essentially kind of wrappers around these models, why isn't that enough to really have AI revolutionize legal practice?

Justin Curl: And I think it's important to distinguish between two ways that someone could receive legal services.

I think the one that you just mentioned, with law firms, more directly implicates the entity-regulations piece of professional regulation of the law. Those entity regulations limit who can own equity in a legal services business. So I think it's no surprise that all of the law firms are owned by lawyers, because you have to be a lawyer in order to own a law firm.

What, again, Gillian Hadfield points out is that this can create very inefficient business models. In a lot of the smaller practices serving individuals and small businesses, lawyers work eight hours a day, but of those eight hours, only about 2.3 are actually spent doing billable work. The rest of that time is just doing administrative tasks and, like, sourcing clients, things like that.

And so even if AI is very advanced and capable of performing a lot of legal work, the way that AI is integrated into the business might actually be a lot less efficient because of these constraints on how those businesses are run.

Alan Rozenshtein: But let me actually push back on that a little bit because there's a whole cottage industry right now about using, you know, multi-agent Claude swarms to go out and find your clients and to do all your invoicing and all of that sort of stuff, right?

It seems to me that one of the things that AI could do, in fact and this is certainly how I try to use these AI tools, right, which is to sort of automate the administrivia of my, you know, life, whether it's as a teacher or a researcher or a consultant or whatever the case is.

So, I mean, I do wonder if there's a possibility here for these AI systems to be exactly the thing that a mid-sized firm needs, a firm that is run, because of guild rules, by lawyers who, God bless, whatever skills we have, management is often not one of them. You know, maybe what we need is Claude Code, actually, just for that purpose. Why isn't that an answer to the management inefficiency problem?

Justin Curl: So I actually think this is a very good application of AI, in part because I think it is a nice niche that is not necessarily covered by the unauthorized practice of law rules. So if I'm sourcing clients, very few people think that counts as practicing law, so this regulatory barrier actually wouldn't really cover that set of applications.

And so one great way to make a lot of smaller firms much more efficient is to go out and automate a lot of the tasks that are taking up their time so that they can spend more time providing legal services. And so I think this is a great application of AI, and it maybe actually helps prove the point, because it shows that when there aren't those regulatory barriers, you can actually use AI to make things much more efficient.

Arvind Narayanan: I wonder, even in some of these administrative tasks, if there are competitive dynamics. For certain kinds of paperwork, certainly, you know, there's a fixed amount to get done. But something we say later on in the paper is that one of the big barriers to productivity improvements actually translating into a better version of legal services is that there are kinds of arms races. You know, we talk about arms races between plaintiffs and defendants, and I'm sure Justin will say more about that.

But one of the kinds of arms races that can happen, even in the more management kind of work, is you talked about going out there and finding clients. Well, these are gonna be tools that every firm is now going to be using to kind of level up how effectively they can do that. So, what's gonna be the end result of that process? That seems hard to anticipate.

And we're seeing in other cases, for example, in scientific peer review, that there are these arms races between authors using LLMs to try to improve their productivity and reviewers using LLMs to try to automate some of the aspects of reviewing, and it's leading to some very unhealthy kinds of equilibrium. Or not an equilibrium; perhaps it's leading to a kind of death spiral.

So we should be careful about things that at first appear to be productivity improvements, but can in fact upset existing kinds of balance and end up removing certain useful kinds of friction from the process.

Alan Rozenshtein: So, that's great. And actually, let's use that to pivot to the second bottleneck, which is this sort of adversarial point. Arvind teed it up, but Justin, kind of riff on that. I mean, I think everyone has sort of an intuition that law is a somewhat adversarial profession. But talk more about that, and how that might lead to AI being sort of largely a wash when it comes to the provision of legal services.

Justin Curl: Two things on this. I think the first is it's important to understand, like, when we end up in a world where this becomes the predominant bottleneck. Even if we're in a world where AI is being used very widely, and it's being used to make lawyers much more productive, this is still a constraint, because if you give both sides access to AI, and you're sort of locked in this zero-sum process, the amount of work that each side does could essentially just go up, because now both sides are being hyper-productive with AI.

Instead of writing like one motion or writing five pages or looking at a hundred cases, they're now doing a hundred X that in all of those relevant domains. So the amount of outputs has increased, but because the outcome that clients ultimately care about is settling favorably or winning at trial, it takes much more work and much more outputs to reach that exact same outcome. And so, although AI has made both sides more efficient, you end up doing a lot more work.

The second thing on this, and maybe the historical analogy here, is the discovery process. A lot of people thought that digitization was gonna make discovery way, way easier because you can now just “control F” for documents. So it's much easier to find the relevant documents.

What they didn't expect was that there would just be many more documents being created. Digitization means you can now request a lot more documents and share a lot more documents, and the net result is that discovery now consumes something like half of the time that first-year associates spend working.

It's also become one of the most expensive parts of litigation, and litigation costs have not actually come down. If anything, they've gone up in a lot of the complex cases.

Alan Rozenshtein: How much of this is a litigation story versus a sort of general law story, right? So again, I think most people, when they think of law, they think of litigation, and that's obviously a large part of it.

But litigation is only one part, and I'm not even sure it's the plurality, frankly, of legal practice. I suspect transactional work, especially when you include smaller-scale stuff like, you know, wills and things of that nature, is probably, if not the majority, then the plurality of legal work. And then of course there's a bunch of in-house work.

So how much of this adversarial kind of arms race problem is a litigation story, and how much of it also bleeds into, let's say, transactional work?

Justin Curl: I think some of it definitely bleeds into transactional work. It, again, depends on the dynamics within transactional work. When you touched on wills, to me that doesn't seem like a clean adversarial process, 'cause the goal is just sort of to match the intent of the person who wrote the will. So, like...

Alan Rozenshtein: There's no 'who's on the other side,' right?

Justin Curl: Exactly.

Alan Rozenshtein: You know, God gets it all in the end.

Justin Curl: And then in some sort of transactional context though, like say you're negotiating a merger between two parties, that, to me, starts to seem a lot more adversarial.

Like, oftentimes transactional lawyers distinguish their work from litigators' by saying, no, we're much more positive-sum, it's much more collaborative. But at the end of the day, there's how you draft your contract provisions, what you choose to include, what you disclose to the other side.

There's a very fine line between what is and is not okay. And how you skirt that line can actually translate into advantage for your side. And so you may end up using AI to take advantage of that.

Alan Rozenshtein: Arvind, let me go back to you. You mentioned sort of earlier that, obviously, law is not the only place where you have these arms races. You gave some examples.

The example I was thinking about was actually trading, right? Which seems like a perfect example of this. You know, we've already seen this before AI, where you have these sort of massive high-frequency trading outfits that are spending God knows how much money, but it's sort of not clear that they're making anything necessarily that much better, because there's just someone else on the other side.

I'm curious, again, zooming out, and from your work thinking about AI as a normal technology across the entire economy: how much of the economy, how much of its productive economic worth, is vulnerable to these sorts of kinda adversarial conditions, where the result of AI is not really lower cost, it's just everyone using AI more to sort of try to beat each other?

Arvind Narayanan: Yeah, it really comes up everywhere. Trading is, of course, a perfect example. I remember 10 years ago there were proposals for exchanges in the middle of the ocean, something like that, because the speed of light was becoming a constraint in high-frequency trading. And so that's an example where, I don't know if they ever ended up actually building it, you know, in the Atlantic Ocean between London and New York, but it's a perfect example of sinking a lot of money into something.

That brings benefits that are purely relative, right? If neither side has access to it, you haven't lost out on anything; your trades are a fraction of a second slower. You can't argue that it's actually a benefit to society to build these things in the middle of the ocean. So yeah, these dynamics come up literally everywhere.

We just talked about peer review. But there's this great book called “Bullshit Jobs” by David Graeber. He has, I think, five different categories of bullshit jobs, and one or two of those categories are all about how so many jobs, across every occupation, are not really about providing the service better, but about doing it better than your competitors.

And so better on that dimension doesn't actually translate to better service for consumers. So this is not specific to law. It comes up really all across the board.

Alan Rozenshtein: Yeah, I remember that book. I think the essay it comes from is even better 'cause it's a nice tight read. And I do recall corporate lawyer was one of the main examples that he gives.

Alright, so let's now turn to the third bottleneck, which is this need for human involvement. So, Justin, what is this need for human involvement in the law? Why can't we just have, you know, robot lawyers arguing in front of robot judges while I sip, you know, daiquiris on the beach?

Justin Curl: Well, okay, so you definitely could have that world. I personally would not really want to live in that world. I think even the most sort of AI-pilled people out there are still hesitant about the idea of turning over judges in society to AI. I think there are also compelling constitutional reasons not to do that, namely Article III.

But putting that aside, I think the human element is this: assuming we want human beings to be involved, there is a limit to how quickly judges can process cases. Going back to this litigation example, both sides are producing a bunch more work. They're writing much more sophisticated briefs, they're citing more cases.

Also, it probably becomes easier to file lawsuits, so there are just a lot more lawsuits. As that happens, the new bottleneck becomes the time it takes for judges to adjudicate those cases. Going over to the transactional side, as contract provisions get longer and these negotiations become more complex, I think the bottleneck becomes the ability of human lawyers to actually understand what's going on on behalf of their clients.

I, for one, would want to be in a world where corporations know what they're agreeing to when they sign contracts. And so having someone inside the corporation who understands this is sort of the final bottleneck. So even though AI makes things a lot faster, there is still the question of how quickly human beings can work.

Alan Rozenshtein: So the normative question about whether we want human judges, human decision makers, and the legal question as a constitutional claim, that's interesting. Let's put that to the side for a second, 'cause I want to push on the kind of psychological assumption, or the empirical assumption: as a matter of human psychology, as a matter of what is sometimes called sociological legitimacy, not whether the system is a good one, but whether people perceive it to be a good one.

Whether that requires human decision makers—I'll admit, and maybe this just shows how out of touch I am with actual human beings and that I should log off Claude Code more—it is not obvious to me that there is actually going to be such a demand for human decision makers outside of, you know, let's put criminal law to one side, outside of the most high-salience contexts.

I mean, it certainly seems to me that the sort of story of modern human sociability over the last 20 years is the increasing replacement of sort of human connection and human engagement with digital connection and digital engagement. And again, that may be a bad thing, right? That may be quite possible, but it still appears to me to be a thing.

So I'm curious for your thoughts, Justin, about whether that might be a possibility in the legal sphere, and then also zooming out, Arvind, for your thoughts about how, you know, we might think about that in, in other domains, right? Because you could presumably tell the same story about education or mental health therapy, but at the same time, I don't know, maybe it's possible that in 10 years we'll all be using chatbots as the bulk of mental health therapy and people just got sort of used to that.

Because, you know, humans are malleable creatures. So let me start with Justin and then I'll move to Arvind.

Justin Curl: I know you said put the normative considerations aside, but I have to fight the hypothetical on this one, just because I do think if you're making a decision about whether someone gets, like, 10 years in prison or not, that is such an important decision, it carries such moral weight, that I would want a human being to be involved in it.

Alan Rozenshtein: Sure. But again, like criminal law is still a relatively small percentage of legal practice, and I think the sort of economic story that you are talking about is also more relevant to commercial litigation and the commercial practice of law than the criminal practice of law, which is why I'm sort of curious about the sort of like 90% of legal disputes that are not as high stakes or salient.

Justin Curl: Yeah. And so I think this is where one of our reforms actually is to have sort of parallel tracks. You could imagine you have judges for the contexts in which human involvement is most important. And then for those less important things, you have a parallel track such as arbitration.

And there are a lot of problems with arbitration, about whether people are actually consenting into it properly, but assuming that they are, you could have AI judges as a way to make the process more efficient in contexts where you're less worried, and the stakes do seem lower. So if there's a finite pool of human attention and human time that we can allocate to these tasks, we should allocate it in a way where it's most needed.

Arvind Narayanan: So let me share a couple of thoughts, both specifically for judging, but also like you said, zooming out, Alan. One thing I'd say is even in some world where, you know, AI could make these decisions, what do we want the humans to be doing in that world?

To me, you know, judges make law a lot of the time, and exercising human judgment about what we want the world to look like, that's the perfect example of what I would want humans to be doing in a world where all conceivable labor can be automated.

I mean, I think that should be literally the last thing to get automated, again for normative reasons, regardless of whether it can or can't be automated.

Alan Rozenshtein: Well, let me ask about that, 'cause I'm curious to push on that intuition. Where is that normative commitment coming from? I mean, you can imagine a lot of arguments for it. You could say, well, it's because I think that humans will always do a better job, at least in some cases, in exercising that judgment than AIs do.

In which case I would say that we did earlier, however, kind of stipulate that at least in a lot of domains, AIs are getting quite good, and then you'd have to have, I think, a somewhat rosy view of how good human decision makers are, at least the median human decision maker.

Or you might say, I worry that if we outsource those decisions to AIs, we ourselves will kind of have an almost moral de-skilling, right? We will lose the capabilities.

Or you might say, you know, human psychology just requires something carbon based to pass judgment on me, and it'll just rebel and society will fall apart, right? If we have this done by computers.

But I don't know. I'm curious to push you on this, just to clarify where that intuition is coming from.

Arvind Narayanan: Yeah. It's a simple answer. I think this is what it means to be in control of our own civilization. This, I mean, all those debates about AI safety, this is actually what it boils down to for me, not killer terminator robots. These kinds of moments where we put the course of humanity in the hands of machines.

I think that's a line we should not cross. I mean, historically, look at any case that had a significant impact on how society functions, say Brown versus Board of Education. Is that something we would want to have been decided by a robot? I don't think it's a matter of accuracy. There are no accuracy standards by which you can claim that AI is doing a better or worse job than a human judge.

It's purely a normative question; it does not have an empirical component. And you know, I would say that a world in which we leave these kinds of decisions up to AI is not a world I wanna live in. And hopefully that's true for the majority of people.

Alan Rozenshtein: And just to clarify: it sounds like it's not because you don't think AI could have written Brown versus Board of Education, it's just that the whole point was that humans decided—

Arvind Narayanan: Correct.

Alan Rozenshtein: That's the whole point of democracy, right? Not necessarily that we come to the best decisions, but that they are fundamentally our decisions. Is that the intuition—

Arvind Narayanan: That's right.

Alan Rozenshtein: I'm not pushing back on it, but I think it is useful to clarify where those intuitions come from.

Arvind Narayanan: Yeah. This is what it means to have agency as a species. These are the biggest decisions that we make about the course of our societies.

Alan Rozenshtein: Okay, so we talked about the impediments to the broad diffusion of AI: the unauthorized practice of law issues, the adversarial dynamics, and the need for human involvement, which means humans will be a bottleneck.

Let's talk briefly about some of the reforms and the solutions that you all propose. But before we get into them, I do wanna ask kind of a meta question.

It sounds like, at least as I read the paper, you all do think that we should have more AI in legal services. I mean, a lot of these reforms are meant to facilitate that spread. But one could, I think, just as easily look at your analysis and say, oh, thank god we have these roadblocks, right? We want there to be these really strong unauthorized practice of law rules. I'm not sure anyone would say we want these bullshit jobs where people are just creating costs, but certainly, Arvind, I think you've just very eloquently set out your argument for why humans being in the loop is a fundamental axiom of what it means to be human and have a human-led society.

Why isn't the last third of your paper a thank god, and we should do everything we can to keep AI out of the legal profession? Arvind, why don't you start?

Arvind Narayanan: Sure. I mean, I think there's gonna have to be some line-drawing exercise. In every profession, people will argue about which aspects of what it means to do that job are fundamentally human and which ones can be delegated to a machine. Certainly in the legal profession, and really any other profession, there are lots of things we've chosen to delegate that we're comfortable with.

I think in a way law is starting from a really good place because there are all of these restrictions currently, and so it's kind of opting in to AI. We have to choose to allow AI to be used for certain things, and it's not like by default AI is gonna replace judges and lawyers. So in a way that's good. And so that is something that I celebrate.

But I don't think the current equilibrium is the optimal one. I think there are a lot of things that make sense to delegate. A simple example: access to justice is such a scarce thing for people who can't afford a lawyer, and if AI can be used to enable public defenders to be more productive, that would be a big win. So that's an existence proof that there are some tasks that make sense to delegate. Exactly where that line is, is not for me to say.

Alan Rozenshtein: Well, lemme ask the same question to Justin.

And I think your perspective is particularly interesting here because, you know, you're a third-year law student, you're graduating in a few months, and you're entering a legal profession that is radically changing. And I suspect you can see that even more than many of your classmates.

Now, I think you've made a very clever bet in focusing on AI, because even if the legal profession goes away, there will always be, I think, demand for your expertise in thinking about AI in the legal profession. But I'm just curious about your perspective. I mean, I can imagine you as a student saying, I do not want AI interfering with my, you know, future job prospects.

And so I'm curious where you come down on this point.

Justin Curl: Yeah. I actually think it takes us back to the beginning of the conversation, where, ultimately, AI as normal technology is a prescription about where we should focus. And I view these reforms and these bottlenecks as opportunities, as where we should be allocating our time and attention.

Because ultimately, one thing I think you learn going through law school is that there's just so much that needs to be fixed, there are so many problems with our current legal system. And I view AI partially as a way to fix those problems, but also as a way to push through, or motivate, the reforms that we've needed for a long time that aren't actually about AI.

Like there are a lot of problems with our access to justice system that aren't really AI problems, but maybe now that people are thinking critically about how we should redesign our system in light of AI, we can start having a better system generally.

Alan Rozenshtein: So let me end the conversation, then, with one of the reforms, or really a grouping of reforms under one category, which to me jumped out as the most interesting, and that is changing these unauthorized practice of law rules.

And so just talk me through what that would look like, and then also how you'd respond to some of the concerns that, well, the reason we have unauthorized practice of law rules is the same reason we have unauthorized practice of dentistry rules, right? We do want some consumer protections around this.

And so why doesn't that concern apply to the proposals that you all suggest? You know, the regulatory sandboxes in places like Utah, which you talk about, or Gillian Hadfield's idea of regulatory markets, which your paper mentions, where government regulators certify private regulators, and it's those private regulators that then, in a kind of market-competitive sense, regulate individuals, almost the way the government recognizes certain accreditation bodies and those accreditation bodies then accredit schools.

You know, all of this is very clever, but it ultimately is in the service of weakening unauthorized practice of law regulations. And someone might argue, though I'm not sure I would, because I worry that the current rules are just guild capture, that all of this is just making customers of legal services more vulnerable.

So lemme start with Justin, and then, Arvind, I'm curious how you think about these issues more broadly, 'cause as I mentioned, very similar issues come up in other professions and even in non-professional settings like software engineering.

Justin Curl: Yeah, I think ultimately, if the purpose of unauthorized practice of law regulations, in their strongest form, is to protect consumers from unethical practitioners giving bad legal services, I'm just not convinced that they're actually doing that good a job of it right now.

They seem to be making things much more expensive, and there are some people who've passed the bar who give horrible legal services. And then if you think about the debt collection context, 70% of people are losing by default because they didn't actually respond to the lawsuit, because they couldn't afford a lawyer.

If you look at some of the most consequential, legally relevant decisions in our lives, like whether you're getting divorced or whether you're getting evicted, people just don't really have access to lawyers. And so I just don't think that UPL rules are serving their intended purpose right now.

And so that's sort of why I'm for changing them in some way.

Arvind Narayanan: I'm trying to say something useful by comparing it to other domains like software engineering, without making it sound like, you know, I'm giving advice to lawyers on how to run their profession, because that's not for me to say. Okay, so lemme say this.

Certainly, it's easy to understand the motivation behind unauthorized practice of law rules, but I think, as Justin said, they are currently not that great at serving their intended function, and they seem to have all of these unintended consequences, guild capture as you might put it, that are deeply problematic.

I think there could be other ways of ensuring that consumers are not harmed. As a non-lawyer, not a legal scholar, it's not really for me to say what those are, but maybe this moment of upheaval around AI is a time when we can have a lot of innovation around the way we regulate different professions and what institutional and organizational structures we put in place.

Software engineering is one example of a field that, like I was saying earlier, is not professionalized, but still has a mix of various kinds of checks in place to ensure that horrible outcomes don't result. So maybe there's something to learn from different fields. Maybe we don't have to put all of our weight on unauthorized practice of law rules.

Alan Rozenshtein: I think it's a good place to leave it. Arvind, Justin, thanks for coming on the show and for writing a great paper. It's very interesting and I do hope that both optimists and skeptics of AI in the legal profession get a chance to read it.

Justin Curl: Thank you.

Arvind Narayanan: Thank you, Alan. This has been really fun.

[Outro]

Alan Rozenshtein: The Lawfare Podcast is produced by the Lawfare Institute. If you wanna support the show and listen ad-free, you can become a Lawfare material supporter at lawfaremedia.org/support. Supporters also get access to special events and other bonus content we don't share anywhere else.

If you enjoy the podcast, please rate and review us wherever you listen. It really does help. And be sure to check out our other shows, including Rational Security, Scaling Laws, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. You can also find all of our written work at lawfaremedia.org.

The podcast is edited by Jen Patja with audio engineering by me. Our theme song is from Alibi Music.

And as always, thanks for listening.


Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland. He also speaks and consults on technology policy matters.
Justin Curl is a J.D. candidate at Harvard Law School currently serving as the Technology Law & Policy Advisor to the New Mexico Attorney General. He's interested in technology and public law, with a research agenda focused on algorithmic bias (14th Amendment), binary searches (4th Amendment), and judicial use of AI. Previously, he was a Schwarzman Scholar at Tsinghua University and earned a B.S.E. in Computer Science magna cum laude from Princeton University.
Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He is a co-author of the book AI Snake Oil, the essay AI as Normal Technology, and a newsletter of the same name which is read by over 60,000 researchers, policy makers, journalists, and AI enthusiasts. He previously co-authored two widely used computer science textbooks: Bitcoin and Cryptocurrency Technologies and Fairness in Machine Learning. Narayanan led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes. Narayanan was one of TIME's inaugural list of 100 most influential people in AI. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE).
Jen Patja is the editor of the Lawfare Podcast and Rational Security, and serves as Lawfare’s Director of Audience Engagement. Previously, she was Co-Executive Director of Virginia Civics and Deputy Director of the Center for the Constitution at James Madison's Montpelier, where she worked to deepen public understanding of constitutional democracy and inspire meaningful civic participation.

Subscribe to Lawfare