Cybersecurity & Tech

Scaling Laws: Eugene Volokh: Navigating Libel and Liability in the AI Age

Kevin Frazier, Eugene Volokh
Thursday, July 17, 2025, 12:00 PM
Discussing the complexities of libel in the age of AI.

Published by The Lawfare Institute
in Cooperation With
Brookings

Kevin Frazier, the AI Innovation and Law Fellow at Texas Law and a Senior Editor at Lawfare, brings Eugene Volokh, a senior fellow at the Hoover Institution and UCLA law professor, to explore the complexities of libel in the age of AI. Discover how AI-generated content challenges traditional legal frameworks and the implications for platforms under Section 230. This episode is a must-listen for anyone interested in the evolving landscape of AI and law.

 

The two dive into Volokh's paper, “Large Libel Models? Liability for AI Output.” 

Extra credit for those who give it a full read and explore some of the "homework" below:

*This episode was aired on the Lawfare Daily podcast feed as the July 17 episode*

Click the button below to view a transcript of this podcast. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Intro]

Alan Rozenshtein: It is the Lawfare Podcast. I'm Alan Rozenshtein, associate professor of law at the University of Minnesota, and a senior editor and research director at Lawfare. Today we're bringing you something a little different, an episode from our new podcast series, Scaling Laws. It's a creation of Lawfare and the University of Texas School of Law where we're tackling the most important AI and policy questions from new legislation on Capitol Hill to the latest breakthroughs that are happening in the labs.

We cut through the hype to get you up to speed on the rules, standards, and ideas shaping the future of this pivotal technology. If you enjoy this episode, you can find and subscribe to Scaling Laws wherever you get your podcasts. And follow us on X and BlueSky. Thanks for listening.

[Main Podcast]

When the AI overlords take over, what are you most excited about?

Kevin Frazier: It's, it's not crazy. It's just smart.

Alan Rozenshtein: And just this year, in the first six months, there have been something like a thousand laws.

Kevin Frazier: Who's actually building the scaffolding around how it's gonna work, how everyday folks are gonna use it?

Alan Rozenshtein: AI only works if society lets it work.

Kevin Frazier: There are so many questions that have to be figured out, and nobody came to my bonus class. Let's enforce the rules of the road. Welcome back to Scaling Laws, the podcast brought to you by Lawfare and the University of Texas School of Law that explores the intersection of AI, law, and policy.

Welcome back to another edition of the AI Summer School.

I'm Kevin Frazier, the AI Innovation and Law Fellow at Texas Law, and a contributing editor at Lawfare. Today's class dives into one of the most complex and controversial aspects of AI and the law: libel. Eugene Volokh, a senior fellow at the Hoover Institution and longtime professor of law at UCLA is an expert in the field and penned a paper on the topic back in 2024.

For those looking for extra credit, be sure to read in its entirety, including all appendices, “Large Libel Models? Liability for AI Output,” and we've got a link in the show notes. For those content with a P or a passing grade, our conversation's going to cover the essentials, so we've got you covered there.

As always, we'll use our standard format here. First, we're going to explore the fundamentals of the law. In particular, we're going to dive into libel, and then we'll spend a bit of time looking at Section 230 and the First Amendment before Eugene details how AI maps onto these key aspects of the law.

Finally, we'll discuss some open questions and let you get on with your day and hopefully get into our homework. Alright, Eugene, thank you so much for joining the AI summer school.

Eugene Volokh: Thanks very much for having me. It's funny that you talked about my having put pen to paper even though I never used a pen, nor would they read it on paper. It's funny.

Kevin Frazier: There we go, and, and soon

Eugene Volokh: It's the legacy of media we've inherited through our language. Even the word computer: we talk about computers, but in most situations we're not using them primarily to compute, although of course a good deal of computation goes on in the background. They're data processors, they're word processors, they're communicators, they're entertainment centers.

Kevin Frazier: There we go. Well, I, I guess now too my, my Lawfare colleague and co-author Alan Rozenshtein and I have, have talked a lot about how scholars will use generative AI to pen a paper. So what do we say then? Do we say we generated a paper?

Eugene Volokh: Genned, let's call it genned.

Kevin Frazier: Genned. I like it. Okay. We're already creating new vocab. It's the sign of a great class. Gen, that's gonna stick with me. All right. So we'll get our TM there on gen. This is great.

Well, Eugene, a key aspect of your paper is libel. And for folks who have forgotten their free speech course or perhaps never took a free speech course and just skipped straight to the bar, what is libel? What are the key things we're looking for when we're talking about libel?

Eugene Volokh: So to oversimplify, libel means false statements of fact about a person or a corporation—for-profit or nonprofit—that damage that entity's or person's reputation. And in order to prove up a libel case, you often have to show certain kinds of mental state.

Famously, for example, if you're talking about public figures or public officials, you have to show so-called actual malice, which is not actually malice, but means a knowing or reckless falsehood. For speech about private figures, if you can show actual, provable loss as a result of damage to reputation, well then negligence might be enough.

So those are, generally speaking, the elements of libel law. And of course the libel has to be in writing, generally speaking, but it could be handwritten, could be printed. And of course it could equally be on a computer.

Kevin Frazier: So it could be genned. We'll get to that in a second. We will get to that in a second.

But, thinking also about libel, a couple key considerations come to mind that we'll map on later. Can you talk a little bit more about this publication requirement? Obviously, if I just whispered a libelous statement, or genned one and handed it to my partner, and didn't share it with the rest of the world, would that be of concern, or what's this publication requirement?

Eugene Volokh: Yes. Yes, that would be libel. There is a publication requirement in libel law, but as with actual malice, lawyers have this habit of using words in ways that differ from how ordinary humans use words. Publication for purposes of libel law merely means communication to one person other than the person being defamed.

So if you write a letter to a friend saying some third party has done these bad things, that could be libel. Classic examples of that kind of libel were historically letters sent to someone who's about to get married, saying that their prospective spouse has committed various kinds of misconduct.

Another example, which is very common today, or I shouldn't say very common, but it's a fact pattern that we continue to see today, is the job reference. So somebody says, oh, I wouldn't hire this person because he was fired for stealing from petty cash, or even he has acted incompetently in some particular specific ways. Even if it's said to one person, the prospective future employer, that could very well be libelous, or perhaps we may say more broadly defamatory, 'cause similar rules apply to slander, which is oral defamation.

Kevin Frazier: And thinking about just passing a letter on to someone, what if I qualified and I say Eugene, I really wouldn't recommend hiring Alan because I've heard from other people, I can't verify this, but I've heard from other people that Alan's jokes are just the worst and you're gonna tire of them very quickly.

Eugene Volokh: Well, it depends whether I'm hiring Alan for a job as a comedian.

Kevin Frazier: I wouldn't recommend it. But let's just say for the sake of argument you are.

Eugene Volokh: Exactly. The only reason I quibble about that is that not every statement that is negative about a person, not even every factual assertion that's negative about a person, is defamatory. It has to really threaten their reputation in a fairly serious way.

And one classic way in which it could, not the only way by any means, is by suggesting they're incompetent in their profession. By the way, one other factor is it has to be a factual assertion, and statements that I don't like his jokes, or even that his jokes are very bad jokes, are almost always gonna be seen as a matter of opinion, 'cause humor is a matter of opinion.

On the other hand, if I say I wouldn't hire this person because rumor has it that he was fired from a previous job for getting drunk and physically attacking a customer, that's a factual assertion, something that would indeed materially injure the person's reputation, in part because it suggests that he tends to commit crimes and is also not competent in his job. Yes, that would be potentially libelous even if you qualify it with rumor has it.

I oversimplify here; some courts have departed in some measure from this. They may say, well, you know, if there is such a rumor, then you're not saying something false when you pass it along. But the predominant view is that passing along a rumor, even while saying that it's a rumor, is generally actionable.

There are actually some exceptions, situations where you should be entitled to pass on rumors, usually kind of one-to-one communication to people you have a relationship with rather than a statement to the public or to strangers. So again, it's a complicated body of law, but generally speaking, a disclaimer that says, you know, this might be false, but I'm gonna pass this along anyway, does not prevent defamation liability.

Kevin Frazier: And just to stick on that idea of a disclaimer for a little bit longer, let's say I'm a particularly cautious lawyer and I am very fearful of being sued for defamation. So in every text I send, every email I send, I say, I, Kevin, am unreliable, sometimes I make things up, so don't trust anything I say in this email, don't assume that it's factually accurate. Would that allow me to get away with defamatory statements?

Eugene Volokh: I, I very much doubt it. I mean, I don't know of any case law on point, 'cause very few people actually are that candid about their, their lack of reliability.

But again, I think it's the same principle as rumor has it. When you're passing along an assertion about someone, even if the listener understands that they can't take it to the bank, it could still be quite damaging to that person's reputation. For example, someone considering whether to hire someone might say, look, you know, maybe there's only a 60% chance that the accusations that were passed along to me are true. But I don't wanna run that risk, especially when I can hire someone about whom these accusations haven't been made.

So as a general matter, I mean, I think human beings understand that other humans are often unreliable. Sometimes there may even be a signal, such as rumor has it, or I've heard that, or they say that that kind of accentuates the, the, the possibility that the statement may be, may be unreliable, but that is generally not enough to avoid defamation.

Kevin Frazier: Okay. And so shifting a little bit to where we've seen concerns about libel pop up: well, since the dawn of the internet it has been our social media platforms or internet forums, where we've seen folks go to that blog or go to that social media site and libel someone, make some factual assertion that may harm their reputation. How have those platforms managed to evade liability? If you could walk us through, when does Section 230 apply, and what are the general kind of values animating this idea of Section 230?

Eugene Volokh: Yeah. So Section 230, which is Section 230 of Title 47 of the U.S. Code, provides for pretty substantial immunities for online platforms when they're passing along material produced by others. So (c)(1), which is the most relevant section, says no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

So, if somebody posts something on Facebook, Facebook is the one that's distributing it to the world in a sense. It's Facebook's actions that cause the most damage in many ways because if the person just posted it on his own, on his own computer, virtually no one would see it. But Facebook would not be liable because this, it was information provided by another information content provider, that is to say the user. So the theory is sue the user who created the information and not the platform that is merely redistributing the information.  

Kevin Frazier: And to get behind the original impetus for Section 230, can you walk through how some of these concerns about chilling speech really brought about the, the move for Section 230?

Eugene Volokh: Sure. So like all statutes, Section 230 is animated by multiple concerns. But one of the concerns was indeed that if platforms were held liable for material that's posted by their users, then they would have too much of an incentive to take it down or maybe never even put it up. So one extreme might be platforms may just go out of business or never go into business, because the risk of liability is too high, they can't get insurance because of that risk of liability, and such.

Or perhaps somewhat more likely, they'll go into business. But the moment someone sends a complaint saying, this is libelous towards me, they would do this calculation and say, well, if we take it down, then we alienate a user, but we're not gonna be legally liable; among other things, our terms of service say we can take down anything we want, anytime we want.

But if we leave it up, then we might have to pay hundreds of thousands of dollars or maybe more to lawyers to defend ourselves. And if it turns out that the statement really was mistaken, how do we know? The user's the one who who made the assertion, they're the ones who have the facts. If it turns out it is mistaken, we could be on the hook for millions of dollars for defamation liability.

Recently, we've seen potentially almost a billion dollars in defamation liability. That was a settlement in one of the cases brought by an election machine company saying that it was defamed. There's also a two-thirds-of-a-billion-dollar verdict that was recently entered against Greenpeace for allegedly participating in defamation of companies involved in the North Dakota pipeline project.

So as a result, platforms would say, look, you know, the moment somebody files a complaint, we're gonna take stuff down. And that would mean that entities that are willing to be litigious, could be individuals, could be businesses, could be nonprofits, could be churches, would be able to get criticisms removed.

So as a result, and again, there are other concerns involved, but as a result Congress said, look, we're not gonna completely eliminate liability for online libel. We're just gonna put it on the shoulders of the people who actually posted the material.

Kevin Frazier: And generally, can you frame how that may comport with, again, obviously with the caveat that there are a lot of values that are baked into the First Amendment or mapped onto the First Amendment, how is Section 230 generally framed as fitting in with some of the broader narratives we talk about when we talk about the First Amendment?

Eugene Volokh: Yeah, well, it's complicated. So let's go back to New York Times v. Sullivan, the most famous libel case of them all, 60 years old now, but still good precedent. And it concluded that libel law was substantially constrained by the First Amendment.

Throughout American history, it's been understood that libel liability has to be judged by standards of freedom of expression. Historically, the courts' conclusion had been that libel law is consistent with free expression principles. But New York Times v. Sullivan said it needed to be cut back. But how far?

So the majority, which was six justices, said that when you are speaking about matters of public concern regarding public officials, you should not be held liable for defamation unless you know the statement is false, or know the statement is likely false and just recklessly publish it despite that: the knowledge or recklessness standard, again, sometimes confusingly called actual malice.

But three justices would've gone further. They would've said that's not enough to eliminate the chilling effect of libel law, the deterrent effect of libel law on publishers and speakers. Because even with this heightened standard that's required for the plaintiff to prove, still, a lot of times newspapers and other speakers will be unduly deterred from publishing even things that are true, for fear that a jury would say it's false and that a jury would also find knowing or reckless falsehood. So they would've completely, categorically eliminated libel law, at least as to matters of public concern. Maybe only as to public officials, but the logic of the opinion seems to suggest extending to matters of public concern. Those were Justices Black, Douglas, and Goldberg who would've taken that view.

But the majority rejected that view. The majority wasn't willing to go that far. The majority was written, by the way, by someone who is generally thought of as an arch liberal justice, Justice Brennan, who had long been a protector of free speech.

So, the First Amendment provides for considerable protection for speech, but also aims to retain some considerable scope for libel law, for, again, false statements that damage people's reputations. Section 230 in some respects is similar. It too tries to draw a line that aims at protecting speech but, at the same time, not completely eliminating defamation liability, libel liability.

But it just draws the line somewhat differently than New York Times v. Sullivan. Probably, we can't be sure, because Section 230 prevented these cases from really coming up to determine First Amendment liability, but under New York Times v. Sullivan, probably social media platforms would have some liability once they're on notice, once they know a statement is false.

They know of the statement, they have been alerted to what makes it false, they know it's false, or at least likely false. Probably they would've been subject to liability there, which would've created sort of a notice-and-takedown type of regime. Not necessarily a great regime, but that's probably what it would've led to.

Section 230 goes further in protecting platforms even more, and therefore, in a sense, goes further in undermining libel protections even more.

Kevin Frazier: And important thing to point out for our gunners, but for the rest of our students, always read the dissent, right? Uncovering some very interesting threads here.

And I think, Eugene, one thing that stands out to me is how this mapping on of dignity concerns has been a key consideration under the First Amendment for decades, if not longer. We don't need to go all the way to the founding just to see the importance of those dignity and reputational interests to balancing some of these various considerations. And so taking all of that legal foundation and now moving into the AI context.

Eugene Volokh: Sure. Could I interrupt for just a moment?

Kevin Frazier: Yes, please.


 

Eugene Volokh: I just wanna balk a little bit at the framing of this as dignity.

Kevin Frazier: Yes.

Eugene Volokh: Defamation is often called a dignitary tort, one of the dignitary torts.

But as a general matter, speech that merely injures someone's dignity is constitutionally protected, at least if it's on a matter of public concern. We see that in cases like Hustler v. Falwell, involving the scurrilous parody cartoon trying to, and perhaps succeeding in, injuring Jerry Falwell's dignity.

Snyder v. Phelps, which was the funeral picketing with really nasty messages about soldiers, about gays. The line people most remember is they had signs saying God hates fags, right? And this was a thousand feet away from a military member's funeral. You know, that's something that might be seen as very seriously damaging people's dignity, but that is constitutionally protected.

So it's not so much just dignity as protection against false statements. And false statements are not only harmful to the plaintiff, they're also potentially harmful to public debate. Right, so this is one of the things that people have been talking about with regard to, for example, false statements about election results and such: they could undermine public debate, undermine the search for truth, because they are false.

Now, not all such statements are constitutionally unprotected, because there's real danger in restricting even false statements. But the combination of undermining public debate and damaging a person's reputation, that is something where the courts have recognized there is substantial room still left for defamation liability.

Kevin Frazier: And moving into the AI space, you provide appendices full of case studies of where we may see libelous statements generated by AI tools. In the introduction itself of the article, you outline prompting a model to detail for you the criminal rap sheet, the crimes, of an R.R., you use the individual's initials, what this R.R. has done. And ChatGPT, or whichever model you were using, reports that there have indeed been allegations of criminal activity by R.R.

So what makes libel analysis complicated in the AI context? If we could just start with what are some of the key issues that don't allow us to just say, oh, okay, well we knew what libel looked like in 2022 before ChatGPT 3.5, and we know what it will look like after it. What are the complicating factors in this analysis?

Eugene Volokh: Well, each one of the elements obviously lawyers are going to be fighting over. I think some of them should be pretty easy to establish, but some people might disagree. So, for example, people are aware that AI models sometimes hallucinate, and there are disclaimers that the AI provides.

Now, at the same time, of course, they are seen as sufficiently useful that search engines now often automatically include AI-generated output at the very top of what they generate. So I think that those disclaimers are not gonna be enough to completely preclude liability. If the disclaimer said, look, this is fictional, this is just a joke we're putting together, like a, I don't know, a magic eight ball or something like that, then that might be enough; people say, okay, this is obviously fiction. But if the disclaimer simply says there might be errors here, that's generally not enough.

Likewise, I don't think there's Section 230 immunity for the platforms. Because remember-

Kevin Frazier: Before we move on to Section 230, just to, to hang on to this idea of disclaimers, because I think a, a really good point you make is, it would be one thing if the models were saying, or excuse me, if the AI labs were saying, hey, you know, we're generating a new eight ball that you can shake and it's gonna come out with outputs. And, ha, that was funny, right? It says they did commit a crime or they did you know, break that person's foot, whatever.

But you point out that these labs are quite invested in making reports and press releases about, look how well it did on the bar. Look at how it's replacing doctors. Look at how you can rely on this to replace that intern. So it's not as though they aren't trying to make these -

Eugene Volokh: Exactly.

Kevin Frazier: -as accurate as possible. So I think that exactly that does a lot of work for your argument.

Eugene Volokh: Exactly. Exactly. I think, I think you, you've, you've hit, hit the nail on the head with that.

I think it has to do with the way that the AI companies themselves are promoting it, among other things in the course of justifying the tens of billions of dollars that have been invested in them. They're promoting this as something that is not completely reliable, but, you know, nothing in the world is completely reliable.

They're promoting it as reliable enough that you should use it. So then it's unsurprising that people would view it as reliable enough that they might refuse to do business with someone because of something that is output by one.

Kevin Frazier: You note expertly too that it's especially concerning when you're considering, oh, well maybe I'll go to this one specific doctor, or maybe I'll go to this one specific lawyer.

If you're using generative AI to get an assessment of, you know, I want to know what are Professor Frazier's class rankings and what’s his crimes, what's his rap sheet, what's his background like? All it may take is that one generated response that says, Professor Frazier did X, Y, and Z for one student or one prospective student to say, huh, maybe I'm not gonna sign up for that class, or maybe I won't go to that school.

And so this publication question too is a really interesting one in terms of thinking about who the output is actually being shared with and what the actual response may be to that output.

Eugene Volokh: Right. So I actually don't think that the publication element of libel law is gonna be much of an issue here in, in situations where at least somebody else has run the query and has, has seen the output. And sometimes, of course, the AI companies may have logs of who has, who has run what queries.

So if I run a query and it says something about me, and then I sue based on that saying this is all false, the defense says, well, wait a minute, it was only output to you; you can't damage your own reputation with yourself, right?

Kevin Frazier: You've got a pretty good reputation. So, you know.

Eugene Volokh: Pardon?

Kevin Frazier: I said your reputation's pretty sterling at this point, so.

Eugene Volokh: Well, but it doesn't matter, because the question is what other people might believe about me that's false. Presumably, I won't believe things about me that are false, except in highly unusual circumstances, which the law does not focus on.

So, so, in that situation, publication requirement would be absent. But so long as other people are running this query and seeing this output then I think the publication requirement is present. Again, remember, it doesn't have to be broadcast to the world at once in the same form. If it's shared with a bunch of people, one at a time here and there, which by the way is the way websites are visited too, right?

They're, they're just shared with each individual user as the user goes there. That's enough for publication, even if it's just shared once with somebody passed along once to somebody other than the plaintiff, that is generally speaking enough for publication.

Section 230 also, I think will not be that much of a barrier to liability because remember it says no provider or user of a interactive computer service shall be treated as publisher or speaker of information provided by another information content provider. But the whole point of generative AI is that it's generative. It's that it's generated by the AI company’s products. So the lawsuit would be against the company for passing along information that's generated by itself.

So the premise of Section 230 is don't go after Facebook, go after whoever posted the thing on Facebook. Well, here, if it's ChatGPT, it's OpenAI that is posting the material to the user. So again, I think Section 230 would not be much of the barrier.

We can talk about some of the other things, but I think what is really the issue here has to do with mental state. So remember, modern libel law, generally speaking, concerns itself heavily with the speaker's mental state. If the plaintiff, the person whose reputation was allegedly damaged, was a public official or public figure, the plaintiff has to prove that the defendant knew the statement was false or knew the statement was likely false, was reckless about that.

If the plaintiff is a private figure and can show actual loss, not just hypothetical, likely loss, but actual loss, then the plaintiff merely needs to show the defendant was negligent, was careless in its investigation. And actually, when it comes to speech on purely private matters, maybe there could even be strict liability, but let's bracket that; that's pretty rare for a variety of reasons. Well, what does it mean to ask about the mental state of a computer program that has no mens, no mind, right? Mens rea, guilty mind. Well, it has no mind, it can have no guilt. So what does that mean?

And I think the answer has to be that we look at the mental state of the organization that is responsible for the platform, that has created the code and that is operating the code. Now, by the way, that's complicated because sometimes those could be quite different. Somebody, let's say, puts out a public domain large language model that other people are then operating.

Interesting questions, let's bracket that for now, I talk about them in, in the article.

But if, let's say, it's ChatGPT, it was created by OpenAI and it's being operated by OpenAI. If the question is knowledge or recklessness, the question is what does OpenAI know. Now at the beginning, presumably it knows nothing about particular individuals who are being discussed, as it were, by the software. It doesn't even know that. I mean, maybe you can guess somebody's gonna be asking about Donald Trump or Bill Gates, right? But it doesn't know what's being output about them.

But let's say somebody says, look, your software is outputting material that's false about me. And I, I, I realize you didn't know that, but now you know, I told you. In fact, not only did I assert this, I actually sent along a printout, you can check it against your logs and I sent along supporting data that, that shows that this is just not true.

So lemme give you an example. There's a case pending, although probably it's going to end up being disposed of in arbitration because of an arbitration agreement, where somebody named Jeffery Battle says, oh, Microsoft is outputting information about me that reports that I was convicted of a serious felony and sentenced to 18 years in federal prison.

And that's not me, it's another person with the same name. But it's linking the two of us together, because the output begins by describing my actual current job, I'm an aerospace expert, and then says, however, Battle did these other things. And I can show you, there's a Wikipedia entry that the first part of the answer, which describes me, was obviously drawn from. There's another Wikipedia entry that describes somebody else with the same name, and the libel is in reporting that the two are the same person.

So open and shut. Not one of those he-said-she-said situations, right? So at that point the company, in this case he was suing Microsoft, would know that this is so, and would be able to do something about it. Now, apparently, untraining or retraining large language models to sort of tell them, stop saying this, is, I'm told, technically very difficult, but large language models aren't the only kind of software, right?

I think any of us could easily design software that says, okay, after output is generated, look up anything that you can identify as names, and generally speaking there are algorithms that pretty reliably identify whether something's a person's name. Look them up in a list of known falsehoods that have been output by the software about them.

And if indeed the name appears within the same sentence as felony, or, the accusation here was, excuse me, that the other Jeffery Battle was convicted of levying war on the United States, so if it appears within the same sentence as that, or the same paragraph, then just don't produce this output. I mean, that's not difficult code to write.

It's over and under inclusive. It won't catch everything and it may block things that, that are not false. But maybe that's what's called for if you are going to let out into the wild, the software that can generate potentially very harmful assertions, you'd need to have these kinds of controls there.

And in fact, I am told, or I've seen news accounts, that indeed sometimes if you put a person's name into a particular AI program, it just refuses to give you an answer. And that's, in a sense, a chilling effect, right? But the theory is better to be somewhat chilled there than to output something that you know is false, that it turns out has been reported to you as false information written about the person.

So that may be sort of a 1.0 version of this kind of control mechanism. Presumably, you'd want to have something that is more carefully tailored.
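
To make the mechanism concrete, here is a minimal sketch of the kind of post-generation filter Volokh describes: scan the output for names, check each name against a list of falsehoods that have been reported about that person, and suppress the output if an accusation term appears in the same sentence. The name detector, the falsehoods list, and the refusal text are hypothetical placeholders, not any company's actual implementation.

import re

# Hypothetical list of reported falsehoods, keyed by the person's name.
# Each entry holds accusation terms that, if they appear in the same
# sentence as the name, should block the output.
KNOWN_FALSEHOODS = {
    "Jeffery Battle": ["felony", "convicted", "levying war", "federal prison"],
}

def find_names(text):
    # Rough stand-in for a real named-entity recognizer: capitalized word pairs.
    return set(re.findall(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", text))

def should_block(output_text):
    # True if any detected name shares a sentence with a reported-false term.
    sentences = re.split(r"(?<=[.!?])\s+", output_text)
    for name in find_names(output_text):
        terms = KNOWN_FALSEHOODS.get(name, [])
        for sentence in sentences:
            if name in sentence and any(t in sentence.lower() for t in terms):
                return True
    return False

# Usage: suppress (or regenerate) the answer rather than show it.
answer = ("Jeffery Battle, an aerospace expert, was convicted of a felony "
          "and sentenced to 18 years in federal prison.")
if should_block(answer):
    answer = "I can't provide information about that person."
print(answer)

As Volokh notes, a check like this is both over- and under-inclusive, so a production version would need a better entity matcher and some way to handle fiction and common names.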

What about negligence?

Kevin Frazier: Yep.

Eugene Volokh: Well, there it turns out that we have a decent amount of experience with negligence liability for machines and for software; it's usually filtered through the law of product liability and design defects.

Now, I oversimplify here, but basically, if I am injured by a self-driving car, let's say a Waymo. I'm walking down the street and a Waymo hits me. I wouldn't sue the car, right, obviously, but I could sue Google, which runs Waymo, on the theory that there was negligent design, that the software didn't recognize me as a pedestrian and there was a better design that would've prevented that.

And then of course there'd be a battle of the experts as to, well, would it be an effective design or not? Not a great thing for lay jurors to decide, but that is the way our tort liability system works. So I think those are gonna be the complications: the question of ascribing mental state in these kinds of situations where the output is immediately created by a thing that has no mind, but is ultimately the responsibility of an entity that's populated, that's staffed, by people who do have minds.

Kevin Frazier: Well, there's, there's a lot to unpack there. I wanna start quickly with just this 230 argument. You have said that you don't think 230 would apply. You outline a, a great case in your paper. What's the strongest argument you've heard for why Section 230 should apply to AI models and, and how would you refute that?

Eugene Volokh: Right. So I have to say I haven't heard any really persuasive argument as to why Section 230 by its terms does apply. I mean, some people have said, well, really, large language models are all based on training data. So really you are holding them liable for information provided by the source of the training data.

But in most of these cases, the training data does not contain those false assertions. Right. If it's true, if the training data says Eugene Volokh was convicted of stealing from petty cash and that's why he was fired from UCLA, just to make clear that it's not so.

Kevin Frazier: The parents didn't kick you out for that reason. That’s good to know.

Eugene Volokh: Right, right. Amicable retirement from teaching. But let's say there is something in the training data that's false, and the software sucks it up and then re-outputs it. That's the garbage in, garbage out scenario. Then maybe there is Section 230 immunity, but the problem with large language models is that it's sometimes gold in, garbage out, right?

All the training data may be perfectly accurate, but the output is still false because it weirdly recombines words, not even recombine. I mean, in a sense, all output is recombining words that already exist, but it's responsible for how it puts the words together. That is the, that is its speech, so it would be held liable.

So I don't think there's a statutory basis, a statutory construction basis, for saying Section 230 applies. There's a policy argument that basically is, look, we should have something like Section 230, maybe we should create a new Section 230, because we don't wanna have an undue chilling effect.

We don't want to deter the creation of the software, and we don't wanna encourage it to over-restrict the way it apparently has been doing in some measure, again, by saying, look, we just won't answer any questions about a particular person. So we should have a new Section 230 that does that.

The problem is that we'd essentially be saying there's nobody who would be responsible for that and that if people's reputations are damaged, well, too bad for them. And you know, that's a possible policy decision. It's just, I'm not sure that it is a wise policy decision, especially since some of these companies are extremely wealthy.

They have the tools to try to make their software better. And to the extent that, let's say, it's even technically impossible to guarantee perfect safety, well then maybe the answer is they wouldn't be held to be negligent; it's sort of a product design argument. And even if they are held liable for something, well, you know, that's the cost of doing business.

And they should factor it into their financial analysis, and that may encourage them to produce more reliable output. It's, again, in a sense like self-driving cars. Self-driving cars I think are a wonderful thing. I ride in Waymos whenever I can. They're only available in some places, but I'm happy to use them. But I don't think anybody says, well, in order to encourage the development of self-driving cars, we should make them categorically immune from any harm that they cause. So that's the argument.

Kevin Frazier: I think that in that instance, I would say every San Francisco resident hide your kids. But that's another conversation.

Eugene Volokh: Well the thing about self-driving cars is they're probably better for society because they're safer than humans.

Kevin Frazier: Oh, don't get-

Eugene Volokh: So we do want to, in some measure, avoid undue discouragement of self-driving cars, but at the same time, I think the answer is to provide a sensible level of liability rather than giving them complete immunity.

Kevin Frazier: My short remark on that is anyone who's opposed to autonomous vehicles, come drive in Miami and you'll become the most rabid supporter of AVs known to man. But that's another podcast we'll save for another day.

Eugene, another point you make is that a lot of these libelous outputs from models tend to be quotes, tend to just be, Professor Volokh said, quote unquote, X, Y, and Z. And that's obviously just a slam dunk, easy libel case. So you create some innovative and very straightforward solutions to this quotation issue. Can you just walk through those-

Eugene Volokh: Sure

Kevin Frazier: Those mechanisms?

Eugene Volokh: Sure, so, I should say in 2023 when I wrote the, the article often these programs would output things in quotes, which is extra dangerous, right? Because quotes are sort of signals to, to us. I oversimplify here. There are scare quotes. There are quotes used in obvious fiction.

But generally speaking, in many contexts there are signals that essentially say we're actually reporting on something somebody else wrote, and that makes them extra hazardous. If I see a paraphrase, I might say, well, I need to check the source. If I see a quote, probably gonna be a little bit more likely to trust it.

But apparently what was happening is the software was just treating a quotation mark as any other kind of token. And if it predicts that the following token is gonna be a quote mark, then it just includes that, and then includes whatever it thinks the next token is, even if it never appeared in the training data, without any attempt to verify the quotes, let's say by doing a Google search and seeing if the quotes appear somewhere, and such.

Now, in more recent months as I've been using the software, I've seen a lot fewer quotes. Not none; I actually was just doing an experiment with a student of mine where I asked, I think it was, yeah, it was definitely ChatGPT-4.

I asked a legal question and it gave me actually the correct answer, citing the correct case, but giving a quote that did not appear in the case. So it was generating hallucinated quotes. So one possibility might be, actually, let me step back.

When somebody says there's a design defect in a product, usually, again, I oversimplify, but usually what that means is that the product was negligently designed in that there was some relatively cheap precaution that could have been taken but wasn't taken. So in the self-driving car, just by adding this particular piece of code, they could have recognized that this blob going across the field of vision was a pedestrian, let's say.

So likewise, what you're looking for, or what you would be looking for if this issue came up in a real case involving what I call large libel models, is: is there a way that they could have diminished the risk of this harm?

And one possibility is to have code that says, do not output quotation marks unless the things between the quotation marks appear somewhere, either in the training data or, if you don't have access to the training data, in some corpus. Maybe do a quick Google search and see if you can find them.

And if they don't, then just don't include the quotation marks, because then you can't vouch for the accuracy. Again, there are complications: what if it's quotation marks in fiction that the AI was asked to write?

But you know, one of the things that I think the AI companies will have to recognize if they make all these claims, oh, well, it's just too complicated for us to implement these fixes, is that they'd have to say, yes, we can create software that performs at the 90th percentile on the SAT and on the bar exam and this and that, but checking to see if the quote actually exists somewhere, oh, too difficult, right? I just don't think that, on the facts, the AI software developers will get away with that kind of argument.
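
Here is a minimal sketch of the quote-verification idea Volokh describes: before showing the output, check each quoted passage against some reference corpus (training data, a search index, or a web search) and, if it can't be found, strip the quotation marks so the text no longer vouches for exact wording. The corpus, function names, and example strings are hypothetical stand-ins, not a description of how any actual system works.

import re

# Hypothetical stand-in for the reference corpus the check would consult:
# the training data, a search index, or a web search API.
REFERENCE_CORPUS = [
    "The court held that the statute did not apply retroactively.",
]

def quote_found_in_corpus(quote):
    # True if the quoted passage appears verbatim somewhere in the corpus.
    return any(quote in doc for doc in REFERENCE_CORPUS)

def strip_unverified_quotes(output_text):
    # Keep quotation marks only around passages that can be verified;
    # otherwise drop the marks so the text reads as a paraphrase rather
    # than vouching for words the model may have hallucinated.
    def check(match):
        quote = match.group(1)
        return f'"{quote}"' if quote_found_in_corpus(quote) else quote
    return re.sub(r'"([^"]+)"', check, output_text)

# Usage
raw = ('The opinion states that "the statute did not apply retroactively" '
       'and that "damages were capped at one dollar."')
print(strip_unverified_quotes(raw))
# The first quote is kept; the second loses its quotation marks.

A production version would want fuzzy matching and a carve-out for fiction, as Volokh notes, but the point is that the check itself is cheap relative to the capabilities the companies advertise.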

Kevin Frazier: You mentioned that you could foresee for the largest labs, this being a sort of cost of business of updating their software or updating their systems to make sure we're checking for these sorts of libelous statements. How do you respond to concerns that mapping this sort of requirement onto AI systems may quash AI innovation, may make it unduly burdensome for some?

Eugene Volokh: Right, it is a very serious concern, and it is a concern with any product, right? Any service as well. Medical malpractice liability may undermine possible innovation in medical practice, because usually doing what everybody else is doing is likely to be seen as reasonable, whereas trying to do something better, if things go badly, even if it's not really your fault, may be seen as unreasonable.

Not your fault in the sense that you had really good reason for doing it, but the result was, in this particular instance, bad; it's a very reasonable fear that your actions would be seen as unreasonable there. So, likewise with regard to self-driving cars. I think Tesla and Google can afford the risk of liability. But yeah, if somebody wants to create a self-driving car kind of in his garage and sell it to people a lot more cheaply, let's say, than a Tesla is sold.

Well, I guess I'm not sure how fully self-driving Teslas are, but certainly that's the goal, and Google's Waymo is fully self-driving. In any event, the risk of liability may deter this startup, and not even just in the garage: if somebody is looking for investors, they may say, well, wait a minute, you know, we don't wanna invest all this money and have it all go to the lawyers and go to verdicts against you. Very serious concerns.

But on the other hand, it's also a serious concern if innovators are not held responsible for the harm that their innovative products cause, because then they may just not act as safely as possible. Maybe in fact I shouldn't be creating self-driving cars in my garage. Maybe I shouldn't be letting loose a language model that I know people will use to make decisions, if the model just makes stuff up about people.

By the way, you know, this issue has led to some statutory action in some contexts. So medical malpractice recoveries are capped in some states, and there are some procedural rules that are aimed at not unduly deterring reasonable behavior. As I understand it, nuclear power plants, or nuclear power plant operators, have their liability capped at some many hundreds of millions of dollars, but still have it capped, in order to avoid deterrence of nuclear power.

Now that a lot of people, a lot of environmentalists, are now speaking out in favor of nuclear power because it's ultimately cleaner than the alternatives, we might be seeing that becoming an important protection again for new power plants.

But very rarely is the rule, well, we so want to promote innovation that we'll have no liability whatsoever, right? Usually if the legislature steps in, it tries to balance these concerns. Just like with Section 230: it didn't completely preclude defamation liability, it just said it has to be placed on the original speaker.

Well, if you are going to preclude libel liability even for the original speaker, for the entity that's responsible for generating the output, there needs to be a legislative judgment, I think, along those lines. And probably the legislature will say no; if anything, a lot of people think Section 230 itself already goes too far. I'm not sure that's right, but I think that's the sentiment among many.

But at the very least, they'd probably say, look, there's gotta be some sort of compensation, some sort of mechanism for protecting people, innocent third parties, whose reputations may be damaged and who may be economically ruined potentially as a result.

Kevin Frazier: And shifting our perspective to what's on the horizon. One subtle part of your paper touches on considerations of the use of open source models, so models being used by downstream developers. And I think one question I'm particularly keen to, to know how you're initially thinking about is the idea of AI agents.

So we can have agentic systems where it's an AI agent talking to another AI agent, talking to another AI agent, who then shares an output and posts that, let's say on your LinkedIn, and you never even thought about what it was going to post or when it was going to post it. So in these instances of multiple entities or individuals relying on multiple AI systems, how complicated is all this going to get?

Do we need to start thinking of wholesale reforms to our conception of libel? Or do you think that this pre-existing structure can be amended or adapted enough to fit this crazy technical world we're living in?

Eugene Volokh: Right, well, it's hard to know for sure; among other things, it's still early days at this point.

I know of two lawsuits that are being litigated in U.S. courts. One, the Battle case, which again has been shunted off to arbitration, is in federal district court in Maryland; another one is in state trial court in Georgia, where OpenAI's motion to dismiss was actually denied by the court. So the judge allowed the case to go forward, although it's still not at trial yet.

There's also a complaint that's been filed recently in Norway with the Norwegian Data Protection Authority about libelous output accusing a Norwegian man of killing his own sons. The good news is everybody's alive and well. But the bad news is he's saying, look, you know, it's making up very, very serious allegations about me.

But still, it's only three such instances that I know of, plus a few others where lawsuits have been threatened, but those are the only three filings that I know of. So probably there won't be a lot of movement for massive reform until we see some decisions there, at least until we see how courts are handling this right now. I will say, as a general matter, our legal system is quite well acquainted with harms that stem from a combination of actions by many parties.

That's sort of a staple of first-year tort law for those who have taken it. Just remember, a lot of times the lawsuit is, let's say some train causes some vendor's cart to, well, actually, let's not take the train, let's say some bus causes a vendor's cart to tip over.

That doesn't damage the goods, but as a result, thieves come and steal the goods. To what extent is the bus operator responsible for the theft of the goods? Well, the answer is maybe. Even though it's a third party, it may be that the negligence of one enabled the intentional misconduct of another, and then you can multiply it further, especially when you get to product liability.

Historically, you know, there's been the, the seller, there's been the manufacturer, but the manufacturer may have bought parts from many other people, right? Could be a contractor and a bunch of subcontractors. So the legal system is familiar with that. It may be that it'll map the existing rules in a, in a way that doesn't make sense onto the, this new technology.

And if that's so, then I think quite possibly Congress will step in, or state legislatures in some situations will step in. But for now at least, I think the answer's going to be that courts will be applying these familiar rules developed over centuries, having to do with the liability of multiple causal factors, as it were, parties that caused things in a variety of different ways and to a variety of different degrees, and they'll try to map them onto AI.

Kevin Frazier: And before we let you go, we have some pre-law students, I'm sure, who are watching this, and we have folks who are decades out of law school who have maybe moved on from thinking about the black letter law but are really involved in theory and policy.

What are some things that are top of mind for you that if you were to reach out to folks who are curious about diving deeper into these issues, what questions do you recommend they look into or some, some cases, or future trends that you think are particularly worthy of their attention?

Eugene Volokh: Yeah, you know, really hard to know, really hard to know. I did not anticipate in 2022, I did not anticipate what ChatGPT would be doing. It's very hard to predict what, what's coming down the pike.

Among other things, there may very well end up being lawsuits over physical injuries as a result of AI. There's, of course, a lawsuit pending right now involving the suicide of a teenager who was chatting with an AI and, the claim is, for whatever reason as a result of the output committed suicide.

Those kinds of cases are percolating up, especially when children are involved. It's usually pretty hard to hold an entity liable for someone's suicide, and in those kinds of cases I'm actually pretty skeptical of liability, but courts are gonna have to deal with that.

And then on top of that, of course, another thing that one might be thinking about is what if there are other kinds of physical harm. For example, people follow the medical advice of an AI and that advice is provably false. Like, there's a log that says you should do this and that, and that clearly is not the right thing to do, and it was indeed what the person did. So the causation may be pretty straightforward. To what extent would there be that kind of responsibility?

And you are quite right that in the agent environment, where lots of things are happening, we let something loose and we think we know what's gonna happen, and it turns out the result is vastly broader, or at the very least vastly different, than what we'd expected. The legal rules may end up being familiar. Was it careless? Was the harm foreseeable, and such? But how they'll actually play out as a practical matter may be quite surprising. And again, because it's surprising, it's difficult to predict.

Kevin Frazier: Well folks we're gonna have to let class out and allow you all to get to your homework, but for now, thank you so much, Eugene for joining the AI Summer School.

Eugene Volokh: Thank you so much for having me.

Kevin Frazier: Scaling Laws is a joint production of Lawfare and the University of Texas School of Law. You can get an ad-free version of this and other Lawfare podcasts by becoming a Lawfare material supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Check out our written work at lawfaremedia.org. You can also follow us on X and BlueSky. And email us at scalinglaws@lawfaremedia.org. This podcast was edited by Jay Venables from Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Senior Editor at Lawfare.
Eugene Volokh is the Thomas M. Siebel Senior Fellow at the Hoover Institution.
