
Lawfare Daily: Old Laws, New Tech: How Traditional Legal Doctrines Tackle AI

Chinmayi Sharma, Catherine Sharkey, Bryan H. Choi, Katrina Geddes, Jen Patja
Thursday, December 26, 2024, 8:00 AM
Listen to a conference panel on AI liability. 

Published by The Lawfare Institute in Cooperation With Brookings

At a recent conference co-hosted by Lawfare and the Georgetown Institute for Law and Technology, Fordham law professor Chinny Sharma moderated a conversation on "Old Laws, New Tech: How Traditional Legal Doctrines Tackle AI" between NYU law professor Catherine Sharkey, Ohio State University law professor Bryan Choi, and NYU and Cornell Tech postdoctoral fellow Katrina Geddes.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Intro]

Katrina Geddes: Although I understand why copyright has been sort of the go to tool for decelerating the pace of AI innovation, it's not actually a great tool for addressing all of the ethical implications of this technology, and I don't think we should be trying to do that using copyright.

Alan Z. Rozenshtein: It's the Lawfare Podcast. I'm Alan Rozenshtein, Associate Professor at the University of Minnesota Law School and Senior Editor and Research Director at Lawfare. Today we're bringing you a conversation from a conference on AI liability that Lawfare cohosted earlier this year with the Georgetown Institute for Law and Technology.

Bryan Choi: On the liability point, in medicine we often worry that too much liability will cause doctors to practice defensive medicine, and that might be worse for patient care. But, you know, think about, would we want AI developers to practice defensive AI practices?

Alan Z. Rozenshtein: Fordham Law Professor Chinny Sharma moderated a conversation between NYU Law Professor Catherine Sharkey, Ohio State University Law Professor Bryan Choi, and NYU and Cornell Tech Postdoctoral Fellow Katrina Geddes on "Old Laws, New Tech: How Traditional Legal Doctrines Tackle AI."

[Main Podcast]

Chinmayi Sharma: What are the harms? Is this actually a conversation that we need to be having? Is AI actually causing any harms on the ground? Is this kind of just premature and lawyers trying to have opportunities to talk more?

Bryan Choi: Well, I would say so my focus has started with cyber physical systems, right, like autonomous vehicles. That's where, like, before ChatGPT happened, we were all talking about self driving cars. There's cars, there's drones, there's, you know, airplanes, right? There's all kinds of software and AI being deployed in these cyber physical systems. So that's, I think, the easiest case for some kind of tort liability before you even start to get into sort of non physical harms.

If you can figure that part out, right, then you can maybe extend it to other types of harm and say, you know, do we also care about a broader set of harms? But I think at least, you know, that core set should be non-controversial, right? That there can be physical injuries caused by AIs.

Katrina Geddes: So, I mean, in the creative space the harms specifically are harms to existing creators.

So, a lot of you are probably familiar with the lawsuits that have been filed against the developers of generative AI models that produce images, text, video that closely resemble, imitate, sometimes substitute for existing work. So, you know, the New York Times is suing OpenAI because ChatGPT is capable of generating verbatim chunks of New York Times articles, and that's a problem if you're the New York Times.

So, so the concrete harm in that space that people are worried about is the displacing effect of these models on existing markets for existing creators. And I won't go into that in too much depth now, but that's sort of the biggest harm that folks are talking about.

Catherine Sharkey: It's hard to add on to that, but I think it's a great first question.

So I'll put it this way. I mean, I think that, to start where Bryan started, lots of times, actually, depending on which body of law you start with, you jump immediately to a certain type of harm. So if you're a torts, products liability professor, I think you start to think first and foremost about physical injuries and property damage.

And, you know, one more concrete example is many people don't realize this, but the FDA, for example, has approved about 700 AI enabled medical devices. So if you're interested in regulation of, you know, medical products, you kind of have to start thinking about what happens with the component or part of the product that gets put in.

If you are in the creative space, you're going to be thinking about protecting, you know, through intellectual property law, etc., the kind of creative process.

I think the only thing that I'll add to that is that in each of these domains, so I'll speak about torts and products, the introduction of AI challenges you to think about, so for example, in tort law, we have historically put physical injury and property damage on one side and economic losses on the other. But yet we think about things like privacy interests as being protected, or reputational interests through defamation.

So there are definitely things that have historic protection, even though they're not physical injuries or, you know, or property damage. So the AI harm question is a really interesting one and intriguing one because it makes you think about why is it that certain types of harms have been prioritized? What's the functional reason why and how might AI change that?

Chinmayi Sharma: Thank you guys. So you actually perfectly teed up my next question, which was, on one hand, tort law has been relatively flexible in that it has identified or given more life to new harms that might not have existed way back in the day, the origins of tort law. However, on the other hand, it's quite slow to do that, and it can take a long time for new harms to be recognized.

So with that lens, Bryan and Cathy: Cathy, you write about products liability, which focuses on defects in products, and Bryan, you write about negligence, which focuses on the unreasonableness of a developer or user's behavior. Do you mind talking a little bit about each approach and how it differs from the other, and then maybe other examples like strict liability? And then, because these are opinionated primers, why do you think that your approach is better at handling the AI problem than alternatives?

Bryan Choi: Well, actually, I mean, so Cathy and I have talked about this a number of times, and I think we share more in common than we disagree, in the sense that we both want some kind of accountability, right? We both want there to be some kind of standard that you're holding, whether AI is a service or a product, and if it causes harm, that you want to have some way for courts to intervene.

So in that sense, I mean, I share, you know, you ask what's better. I think, you know, there's negligence. So even if you were to talk about traditional product litigation, where you have negligent design lawsuits, you have strict product liability lawsuits, right? You can bring those as overlapping claims. And so the real question is just like, how do they differ?

And there's some, I think, procedural differences. There's some historical differences in the way the doctrine has evolved, like, supply chain, a component manufacturer liability or manufacturing defects. But the core of it, it ultimately comes down to some kind of cost benefit balancing of, well, was it reasonable, right? And so both strict products liability and negligence use that word reasonableness.

And I think, again, what Cathy alluded to earlier, just now, you know, should it be just reasonableness with respect to physical injuries and property damage, or should it be reasonableness with respect to a broader set of harms? And I think the AI question, absolutely, right? It does kind of, and more generally data analytics or privacy or, you know, it raises those questions of maybe we should be thinking about, again, expanding that set of interests and you start to see the courts recognizing and pushing in that direction.

Catherine Sharkey: I purposefully had Bryan start to see if we were gonna agree or disagree. But I would agree that at the core each of us is arguing for the power of the common law, whether it's done through negligence-based liability or under a products liability framework, to induce, whether it's manufacturers, developers, distributors, people on down the chain, to take safety precautions.

And so, and he alluded to this in products liability. This is kind of controversial ground in some sense. But strict products liability, when it first came into being in the mid-1960s, was focused primarily, the set of cases that it was developed for were basically manufacturing defect cases, construction defect cases. The classic, you all probably read the Escola case, right, the Coca-Cola bottle that explodes in your face. So construction defects are things that are not made according to their blueprint.

But there's been a dramatic expansion, since the mid 20th century to today of design defect and failure to warn claims. And almost the majority test for design defect or failure to warn is some form of a risk utility test. So I call it a negligence inflected test. It's looking at the costs and benefits, not of an individual's behavior, but of features of the product.

So there's some subtle differences about you focus on the features of the product versus decisions by individuals. But if you take that as a given, then the, you know, light between Bryan and my position on the standard of liability becomes very small because I'm proposing a negligence inflected test. He's a huge fan of negligence and reasonableness.

I'll say three quick things, they're in what I wrote for today, and I'll try to be very succinct. So we can agree, and now I'll try to assert myself, at least to be provocative: I vastly, I'll overstate, I adamantly prefer the products liability framing versus the negligence framing, for three reasons.

One, if you look at the historical evolution, products liability didn't exist in the 19th century, right? It was all in contract. You had to have a one-on-one privity relationship. The history and evolution of products liability runs from when it first came in as negligence-based liability against a remote manufacturer of a car in the famous MacPherson v. Buick case, to how it morphed to a strict liability standard with the exploding Coca-Cola bottle and the argument that mass production of goods means that we have to have this strict liability consumer expectations test, to the dramatic expansion where the courts moved to negligence-inflected tests.

We have kind of a historical track record for liability framing to morph and change based on the scale of risk that's being posed to society. If you go and you read these cases, right, why do we usher in products liability? Because the automobile is a thing of danger that's, you know, subjecting individuals to risk of life and limb.

Why do we usher in strict products liability? Because mass production and consumption of goods is very different from one-on-one handicraft. So I have argued that we now are in a transformational, technological age, the digital information economy. And similarly, society is facing this heightened scale of risk. So products liability, to me, is a better framework because of that.

And then two other quick things. One, the second one, is that with products liability, we have a ready framework, too, when we think about products, like the medical products I alluded to before, that pose really serious potential danger. We have this interplay between regulation and liability. You can have it with negligence, too; we just have kind of a track record of how products that expose society to these types of risks can interface in this kind of nice web, back and forth, kind of feedback loop between the two.

And then the final one, and it's related, is that we have a lot of examples where we can subject something to products liability in kind of a transition, before we know exactly how we would regulate in that area, before we have the precise information to know what optimal regulation would look like. And to me, that seems to fit very much where we are with AI.

There's enormous benefits. There's potentially enormous risks. There's not a well developed kind of, here's how we would optimally regulate. And so we could have this running as our kind of transitional mechanism to produce information about what the best way to regulate would be. So for those reasons, I want to be team products liability.

Bryan Choi: Can I be provocative since Cathy's been provocative?

Chinmayi Sharma: I'm glad; I asked a provocative question and you started on such a nice foot. I want to see the distance here. Advocate for negligence.

Bryan Choi: So I would say a couple of things in response. One, I think Lahav has an interesting piece out about the revisionist history of products liability. And so we have focused on the automobile and mass manufacturing in the 20th century as the story of strict products liability.

And she says, actually, we've got to go back further, right? Food manufacturing, tainted food goods, was actually the progenitor, right? And we know this from Prosser's notes, but there were cases bringing, I want to say recognizing, negligence claims against food manufacturers and distributors long before Buick and before Escola, right? And Prosser took those food cases and turned them into what we now call, or what he called, strict products liability.

But negligence was capable of doing all of those things, like, you know, 50 years before that. The story of strict products liability is a story about a particular moment in time in the 1950s and '60s when we thought, since you asked about strict liability, that strict liability was the future, right? Enterprise liability was going to be the future. And all the way through the '70s, people thought that was going to happen. It was going to continue to expand to be some form of enterprise or strict liability. And that hasn't happened, right?

So when products liability first came out, it was, well, maybe we shouldn't care about foreseeability of the harm, right? Any harm, no matter what, if it's caused by a product, should be compensated by the distributor or manufacturer of that product. And whether you could foresee the injury or not, that doesn't matter, because all we care about is compensating the victim.

That turned out to not work very well, right? So courts kind of pulled back and said, no, actually we do care about foreseeability of harm. And so in those ways, actually the gap between products liability and negligence has narrowed. And so the question is, what is the remaining value of products liability? This experiment from the mid-20th century with strict liability or enterprise liability, how much of that actually continues to be useful in the AI space?

And one way in which I say it might be a distraction is because AI folks love to say, it's the technology. AI is autonomous. It's working on its own. There's no human in the box, right? There's no golem inside. No one, don't worry about it. Even though we know in actual practical implementation there's always a human team behind the scene.

There's always a person in the loop, right? There's someone monitoring. That's the only way they can get AI to really work, you know, on the edge cases. And so I think it can be a distraction to say focus only on the technical features.

Chinmayi Sharma: I love it. We got to some disagreement. So kind of moving to Katrina: perhaps surprisingly to me as someone ignorant of intellectual property law, but maybe unsurprisingly to people who focus on intellectual property law, I feel like IP law came out swinging as generative AI hit the scene, as some of the first doctrine being applied in actual lawsuits against generative AI developers. Do you mind talking a little bit about, like, the origin of these cases, what's currently going on in litigation, and what kind of harms they're addressing?

Katrina Geddes: So, there's a lot to unpack here. So the harms that are occurring in the creative space are, like, very visible. So, you know, if you're a visual artist and a generative AI model has been trained on your work and it's able to produce work that looks a lot like your work, then it's really scary, right? You're really worried about losing your job. And so a lot of artists and creatives filed copyright lawsuits really early.

But what I will say is that copyright is not, like, a general purpose policy tool. A lot of people try to use it for non-copyright purposes for a few reasons. So, everything is copyrighted, right? Like, all of the photos that you've taken on your phone, all of the emails that you've ever sent, they're all protected by copyright automatically. Obviously, if you want to sue for infringement, you have to register, but that's a separate thing. Everything is copyrighted. So basically everything is suable.

The other reason is that copyright has really powerful remedies, right? So you can get injunctive relief, you can force people to take things down. Statutory damages are very high. To the extent that courts ultimately decide that AI training is not fair use, then if you consider the billions of works that are used to train these models and the statutory damages that could attach to each of them, like, the damages are potentially crippling to any company, maybe not OpenAI, but many companies.

And so, so there's a reason that copyright is often the sort of go to tool for taking content down because of these things that I've described. When I say it's not a general purpose policy tool, I mean that there are so many scary things about AI, not just the potential displacing effect on existing creatives, but also like really fundamental questions that we have to ask ourselves about, like, what kind of creativity do we value? What kind of expression do we value? What kind of society do we want to have? Do we want a world where everything we see online is AI generated?

Yes or no, and there are reasonable minds that disagree about it. But copyright is a narrow tool. It protects original expression in creative works. But it cannot resolve these fundamental social questions about the value of human creativity and what kind of creativity we want to protect.

And so, you know, asking the courts to make these very narrow decisions about, is unlicensed training fair use? Is a substantially similar output infringing? Like, those are narrow questions, but copyright cannot answer these broader sort of social and cultural questions about what kind of markets in creative expression we want to have.

So that's what I mean when I say that, like, although I understand why copyright has been sort of the go to tool for decelerating the pace of AI innovation, it's not actually a great tool for addressing all of the ethical implications of this technology. And I don't think we should be trying to do that using copyright.

Chinmayi Sharma: You kind of foreshadowed my next question, but to drive it home a little bit more, to illustrate kind of the point that you're making about copyright not being a general purpose tool, but it being used for things beyond what it was traditionally intended to protect.

There have been a lot of conversations about using copyright to address things like deep fakes or the distribution of non consensual intimate images. Can you talk through, like, what is the theory there? How are they trying to shoehorn these cases into a copyright framework?

Katrina Geddes: Yeah, so, so the reason, so again, the reason that copyright is sort of the tool that people instinctively reach for is because, as I said, everything is copyrighted, it has great remedies.

With harmful content that we want to take down, there is a long history of copyright owners using copyright in that expression to take it down. So, you know, if someone creates, if someone takes a really, like, unflattering image of me and they post it online, and I want to take it down, maybe I think that I can because I'm the subject of the photograph. But if I didn't take the photograph I don't own the copyright in the photograph.

So people have talked about using copyright to remove things like nonconsensual intimate imagery; there's a whole body of scholarship on this. Amanda Levendowski, for example, has done amazing work in this space. I'm not saying that's not an option with deepfakes, but again, copyright doesn't seem like the right tool for that.

Like strictly doctrinally speaking, if someone takes copyrighted content like images or photographs or videos and then turns that into a deepfake, like regardless of sort of the social and ethical implications of creating a deepfake, that's probably transformative fair use.

So I can think of a lot of artistic and other socially beneficial deepfakes that people probably don't want to use copyright to take down. But if we use copyright to take down deepfakes, then we have to take down all of the deepfakes, including the good ones. So some of you may be familiar with deepfake Tom Cruise that's generated a lot of followers on TikTok.

I mean, the harms associated with that are pretty minimal, like I really enjoy them. But if I wanted to use copyright to take them down, then I would have to, like, basically shift doctrine and say, well, you know, taking copyrighted images and videos of Tom Cruise and then turning them into a deepfake is not transformative fair use, and it probably is.

So I think that the legislation that has been proposed to specifically address, like, really harmful deepfakes, and I'm thinking of sexually explicit deepfakes, nonconsensual deepfakes, I think that legislation that's specifically addressed to taking down those deepfakes is really important, and I support those. And I don't think that copyright should be used for this purpose, because I think it would distort the doctrine in a way whose broader ramifications I don't think we would actually be on board with.

Chinmayi Sharma: So I love that, because I feel like lawyers or academics often get the bad reputation, and maybe the deserved reputation, of being very abstract or focused on doctrine or theoretical underpinnings of law as opposed to the reality of what happens when you, for example, apply a traditional area of law to cases that it was not envisioned for.

So I kind of wanted to ask, starting with you, Katrina, and then Bryan and Cathy, if you have thoughts on this. You used the term distraction, Bryan. Is there any downside to focusing right now on, well, in the short term, I want to use this theory of liability to hold AI accountable?

There's like a zero-sum game in terms of attention. Maybe instead, if we couldn't use these doctrines, we would focus on more comprehensive substantive regulation. Maybe we would force courts to move into like a different area, create the new design defect that addresses something like this.

How would you think about the trade-off of, in the short term, we can maybe hold AI accountable now under these traditional doctrines, versus, in the long run, this both having distorting effects on the doctrine and kind of giving us a get-out-of-jail-free card on not actually addressing meaningful substantive regulation that would get at all of the policy issues?

Katrina Geddes: Yeah, so it's a good question. I think, so I understand why people want to use copyright to take down content that they're unhappy with, AI generated content in the short term.

I think it's going to take a really long time for courts to decide. Right, so if the Google Books litigation is anything to go by, it may take a decade before we get, and I mean like a definitive answer from the courts about whether unlicensed training is fair use.

And so in the meantime, I think people are going to do other things, right? So, copyright owners are signing licensing agreements with AI vendors to make sure that regardless of what the courts say, they get a slice of the pie. There are artists who are using technical tools to protect their works from unauthorized scraping, so like data poisoning attacks or watermarking. So in the meantime, people are going to turn to short term remedies.

I think over the long term, I want courts to give us an answer. I don't think platforms ultimately should have sole discretion about what is infringing use and what is not. But I think over the long term, probably copyright will only be able to answer, like, a very small fraction of the questions we actually want to ask ourselves in terms of how is generative AI going to contribute to sort of the marketplace of ideas and the kind of cultural expression that we value.

Bryan Choi: I do want to respond. So I heard two questions in yours. One, is there a substitution effect, right? If you focus on litigation, is it going to have some kind of, you know, downside for other forms of regulation? And the second question you said is, what are the unintended effects of, you know, misapplying a liability theory, right?

On the first question, I think actually it's not a substitution effect. It's a complementary or boosting effect, right? Oftentimes the court makes a decision and the Congress says, oh my gosh, how could you have made that decision? We have to now pass legislation to undo that court decision. Or it can spur it, like, you know, oh, the courts are recognizing this cause of action, let's now make that a piece of legislation because we think that's a good idea.

So I think there's a, you know, Cathy's written a lot about this, right? The feedback loop between, well, she's written about agencies and courts, but also between Congress, right, the institutional bodies. I think that's, like, super important. And sometimes courts are faster at moving than legislatures, right? So, you know, I think it's a false dichotomy to say that focusing on courts will somehow prevent us from getting action in other domains. I mean, the revenge porn statute, that's been going for, you know, a while now, right?

Okay, so the second question is misapplying a liability theory, and here I'm going to come back to strict products liability, which is, you know, this question of unintended effects, right? In the '50s, '60s, we thought we could just throw liability on the manufacturer. Like, strict liability, that's fine, nothing bad is going to happen, and we had these sympathetic plaintiffs, right?

Victims who are being hurt by exploding soda bottles, of course they should get compensated. Let's just, you know, give them money. And then in the '70s we had a liability insurance crisis and a recession and other, you know, sort of cash flow problems. And so the courts said, well, actually maybe that wasn't such a great idea. Maybe we should pull back on these theories. We don't have infinite buckets of money, right?

So, you know, Cathy again has written about how insurance infuses the tort system, right? That it's like a critical component of how the tort system operates. And so I think we have to be mindful of, yeah, like, you know, just saying there's a sympathetic victim, therefore we should have a liability theory no matter how, you know, contorted it is, that probably will have downstream consequences.

Catherine Sharkey: I think that the power of the common law is very strong, and I think that people sometimes only recognize it in times of regulatory failure or regulatory inaction. It's very interesting to see people, for example, who are interested in regulating all sorts of areas of health and safety, who have not a whit of interest in the common law; when suddenly their regulators aren't doing anything, they become really intrigued, right, with the common law.

Like Bryan, I actually think there's a complementarity, and sometimes the common law is a good place to look where there's regulatory failure or inaction. But I think the power of it, if you look especially historically, is even greater than that. And Bryan alluded to this: there are multiple instances where the common law actually did a very good job, as I've argued, in sort of transition, surfacing the need for either legislation or regulation. And better legislation or regulation happens after a period of this transition than would have happened at the outset, if we're worried about misfiring.

Bryan talked about a slightly different, I don't want to get down into the weeds, view of the evolution of products liability. But I would agree with him on one thing. Negligence is the road not taken. And because it was the road not taken in products, which then morphed to a risk utility, negligence-inflected test, the reason I would prefer that framework is we just have a body of evidence to look at how we can combine ex-ante regulators like the FDA with products liability.

But we'll leave our, you know, in-the-weeds fighting aside for a minute to make the broader point that I don't believe it's a distraction. It's sort of like saying that we have some idea of how we would optimally regulate, and that if we would just think harder about it today, we could regulate in the optimal way. But we have dramatic uncertainty, dramatic uncertainty.

Take a different example, fracking, right? There was a time some years ago where we had no idea what the costs and benefits would be. I know there are political arguments going on now; I don't want to be too controversial. But the idea is, if we didn't have tort liability down there surfacing, you know, the actual realized harms that were happening, with people coming forward in litigation, thus giving us some information about the scale of risks and benefits, we would have no idea how to sensibly regulate.

One final example, think about online platforms, right? Amazon, historically, was saying they were not a seller of products because they never transferred legal title. In fact, their whole business model was designed so that, even when they take products and put them in their Amazon warehouses, they never accept legal title. So no one was regulating them, and actually, in the courts, for a long time they were winning on these arguments: we can't apply products liability because they're not a seller.

And then the dam broke, and actually courts now, you know, there's still division, but the California courts led the way in saying, for functional reasons, it's not just transfer of legal title. That was like a historic proxy. And we have to get over that and think about why it is that we hold someone liable as a seller, and they started imposing liability. And then, lo and behold, there's all sorts of, like, legislative proposed bills or regulators, right?

That's just the way it goes. I can think of a lot of examples that go that way. And I can't really think of a single example where, like, the common law working in this transitional space crowded out, you know, the energy that we would get. And they do work hand in hand.

I mean, we would probably get there faster if we had a lot more, you know, governmental money going into, like, independent research on risks and benefits. I'm not saying that the tort system is, like, perfect or the only way to go, but we have a lot of historical examples that at least give me confidence.

The comment that I'll make, it's interesting hearing Katrina on copyright. So sometimes, for example, people would say similar things about products: products liability is not good because a seller is someone who transfers legal title. Or product isn't a good frame because a product is a physical, static thing.

I actually like to get quite theoretical and conceptual and look at why, right? It's not the thingness, the static physical thing that meant we should apply this framework. Nor, right, as courts now are telling us, is it the fact of transfer of legal title that meant you're a seller. You have to look to the underlying reasons.

And in that way, at least in the tort law space, you know, the courts can be quite flexible and adapt and morph. And they can actually, you know, as I said before, surface kind of information about risks and benefits in a way that otherwise we're going to be at a loss to how to do this optimal regulation.

Chinmayi Sharma: That was great. And I think the three takeaways I got from that were: when we think about using doctrine for instances it was not originally intended for, we have to think about whether we want it to apply to all the potential cases that it could be applied to. Take down bad deepfakes, maybe you take down all deepfakes.

I think there is a point about the reinforcing nature of regulation and the common law and the information forcing mechanism that the common law can have. Whether it is products liability or negligence, it can actually be more flexible and unearth places where regulation should step in and reinforce what common law is already doing.

And then the last thing, which I think is often forgotten in these conversations, is the common law has a massive body of law, so it kind of matters whether you decide to go with negligence or products liability to some degree, because you're importing this whole history of doctrine and the kinds of cases that are applied and the way they've resolved issues. And so kind of looking at which one you feel more compelled by, or which one you think offers the more effective analogies, really matters.

So to get at a specific aspect of negligence, Bryan, you write about professionalism and holding AI developers accountable to a customary standard of care in negligence lawsuits, and so basically treating AI developers like we would treat doctors or lawyers in the court context. So do you mind talking a little bit more about that?

And then do you think that might be a useful way to get at some of these more amorphous harms that traditional negligence or, like, a focus on products or outcomes might have a harder time getting at because of the focus on physical harm or economic harm?

Bryan Choi: Yeah, I appreciate the question. And so Chinny's written about this as well, AI's Hippocratic Oath, and so you should go check out her work on professional licensing of AI engineers.

You know, so Cathy alluded earlier, there's massive uncertainties in this area, and I agree. And because of those massive uncertainties, I think it's not just about what kinds of injuries could happen or what the risk benefit is, I think it extends to what are the best practices, what are the, or even the standard practices that AI developers should be following.

And if in fact there is that kind of uncertainty and it's not resolvable by some kind of, you know, collective scientific community then I think that's what calls for an alternate standard like the customary care standard.

So the sort of canonical place where this standard is applied is medicine, right? The practice of medicine, there's just a lot of uncertainties. There are things that we kind of know, but also there's just a whole lot of things where we're like, we have no idea why this works. We don't know the pathways. Each patient is a little bit different. So we'll, like, try a recommended course of treatment for two weeks, and then if it works, great. But if it doesn't, we'll try the next thing, right?

And you have communities of physicians or healthcare providers that just have disagreeing philosophies about the best way to treat. So this goes back to, again, the 19th century, right? There's, like, alternative medicine, there's homeopathy, there's traditional medicine like bloodletting; there's a whole division of sects of medicine that disagreed fundamentally.

And so here too, I mean, you can draw that analogy to AI, right? That there are, you know, maybe differences in opinion about the best way to provide guardrails, or the best way to train, or, you know, are transformers gonna be the way for all models, or just some models, or how much do you iterate?

I mean, there are these kinds of question marks that I think until they're resolved, we might be better off saying, you know, let's wait and see and figure out where the consensus or the areas of consensus are, where the areas of consensus are not.

And that at least in my, again, my sort of interest in adopting that standard is not to take a hands off approach but to allow courts to use that as an entry point to say, we are not going to sit back and say, you know, we can't adjudicate these cases. No, we can because we're going to apply this customary care standard and that's a way to get law involved.

Chinmayi Sharma: Cathy, do you have a response? Because I know that we've had conversations in the past about using the customary care standard versus products liability to get at these same issues.

Catherine Sharkey: Yeah, so I'm going to now be more conciliatory, and then I'll try to, so, because Bryan made this point earlier, I just want to echo it: a point of agreement actually is the power of attaching a form of tort liability. And one of the reasons that I think we both share is actually this interplay between tort and insurance.

So there's also a whole idea about how, when courts are facing new risks, so take, like, the data breach context, for a long time, because they were conceiving of the harm as maybe purely economic losses, there was a thought that there'd be no duty of care to impose, whether it's under negligence or under some form of products liability.

And once there's a threat of that, so when some courts started saying there is a duty of care, then there was a lot more interest in not just first-party insurance, right? First-party insurance is stuff like, the people who are worried about a data breach are going to be worried about their own business's losses and getting, like, a forensic team and maybe even lawyers involved with data breach notification.

Third-party liability insurance you only need if there's a threat of liability, right? So the component of third-party liability insurance works in a way that I think many people don't appreciate enough. So it works not just to spread losses, but it works as a form of risk management. It varies across different, you know, areas of the law, how well it serves that function.

But one thing insurance can do is aggregate information, right? And they can actually come up with the kind of protocols and standards along the lines of what Bryan's suggesting; it could feed the idea of what's customary. They could come up with protocols that they're going to insist upon, then, in sort of the data breach area, that people use, you know, certain safety mechanisms. So if the idea is, how are we going to search for information about how to prevent or mitigate harms, we should really be thinking about the power of insurance, liability insurance, in that realm.

So now though, maybe to sound a disagreement, again, I don't want to do so too forcefully here, there's an interesting analogy that I have in mind. So think about when a doctor is using a medical device, even an AI-enabled medical device, versus a movie theater that, like, serves popcorn, right? The courts have wrestled with this question and have said, well, you don't sue the doctor for products liability. You sue the doctor for malpractice and how they might have used this. And you sue the product manufacturer.

So the first point is, like, they're not mutually exclusive. And there are reasons why you would want to have both forms of action, one against the doctor for malpractice, one against the manufacturer. Whereas at the movie theater, the courts have been much more willing to think about the idea that you can hold the theater liable because they're, you know, showing the movie to you, and that's separate and apart from their selling you something over there.

So these are intriguing questions. In some sense, whether you start down Bryan's road or my road, I feel like you're going to get to some conceptual difficulties that are going to face us no matter where we start. So one is this products versus services kind of divide. Another will be professionals.

Bryan, you know, knows about this and alluded to it in his writing, but hasn't mentioned it here. But even the restatement on liability for economic harms basically says that as between contracting parties you can't sue in tort for purely financial losses arising from the performance or negotiation of the contract. There's an exception for professionals, and of course doctors and accountants are professionals.

Everyone else, you know, the courts are struggling with: are engineers, are architects professionals? Who's a professional? Because it matters, because they can be sued not only in contract, but in tort. So there's a big, you know, you have to answer the core conceptual, theoretical question. Why? Why does it matter? And what's at stake in terms of this?

And the final note I'll just say is that it is interesting to listen to judges try to figure out who is and isn't a professional. They come up with criteria like, you know, do you have to get a license, do you have to go through a lot of training? And then suddenly when the example of a masseuse gets put forward, they have to get licensed, there's lots of training. Oh no! We don't mean that! So what do you mean?

You know, there's like an intriguing question of why professionals have this kind of carve out, what professionalization means, and who is and isn't one. We have well developed bodies of law for those who have always fit that bill, and for, you know, AI developers, et cetera, you'd have to answer the question of, you know, why would you want to treat them differently from all other providers of services that might lead to harms?

Bryan Choi: Can I add just one quick supportive comment, which is on the liability point in medicine, we often worry that too much liability will cause doctors to practice defensive medicine and that might be worse for patient care. But you know, think about would we want AI developers to practice defensive AI practices, right? Maybe that's not such a, it maybe has a different valence. And so I just wanted to kind of add that point.

Chinmayi Sharma: I think some in the crowd are also cyber security folk, but I think the point about the role that standard of care, tort liability, contracts, and insurance all play is very complicated. And in the cybersecurity context, we've actually seen that insurance companies have stopped insuring certain companies, and we've seen that tort liability or regulatory action hasn't been enough to change company practices to make them more secure. So it is just like a real finesse of how exactly you apply these things.

So that kind of gets me to my next question, which is, one of the hardest things is the supply chain question. In a lot of areas of law, negligence and products liability and copyright, and all of your primers talk about this, liability would attach differently depending on who you are in the AI supply chain.

So you have, like, your AI developers, and even there you could be your foundation model developer, you could be a fine-tuner. You have AI deployers, so you have companies that are like, I'm using AI in the provision of my products or services. And you have AI end users, where the service or product that I am receiving is the AI.

Do you guys mind talking about kind of how negligence, IP, and products liability would treat those parties differently and where it might hit stumbling blocks there?

Katrina Geddes: Sure, so, I can't answer this question without pointing everyone to James Grimmelmann's article, which I think, Paul, you described as his magnum opus. He, Katherine Lee, and Feder Cooper have a phenomenal article where they detail very comprehensively all of the liability issues associated with intellectual property within the generative AI supply chain.

Don't be put off by the length of the paper. It is worth it. It is like 100 percent worth it to read it from cover to cover. I think I've done that twice now, so I highly recommend that. So that's the first disclaimer. What I will say, because it's hard to summarize sort of the liability attaching to each component in the supply chain in a brief answer, is that it is important, and it will be important, and we'll see this unfold in the courts, to like break down the generative AI supply chain into different chunks.

So you have the person who creates, I don't want to say person, entity, the entity that creates the training data set. The entity that trains the model. The entity that fine-tunes the model. The entity that, like, aligns the model. The entity that deploys the model. And then at some point you have the end user. And how liability will flow from the actions of each party, toward both the issue of whether AI inputs are infringing and then the separate issue of whether AI outputs are infringing, is, like, a very difficult and complicated question, and it will take a long time to see how that shakes out in the courts.

What I will say from like a risk averse perspective is like you should probably get a license for training data. Also, if you're creating a training data set, you should think about, like, who's going to use that data, right? So if you feed that data to a model that you know is going to be used for infringing purposes, maybe don't do that.

So, sort of, you know, in line with these discussions that we've been having about, like, reasonableness and standard of care, if you are contributing to a supply chain that will ultimately end up with a generative AI model like Midjourney or ChatGPT, you should probably be thinking about how your contributions will have downstream effects on what users are capable of producing and whether models will generate memorized outputs.

And by memorized, I just mean verbatim or near verbatim copies of training data. So all of this stuff matters. So, you know, just because you're up, you're further upstream in the supply chain, you know, you've created a training data set and then you release it online. You say, I washed my hands of it. The courts will probably say that you cannot and you have to think about, like, how your decisions, even as upstream as you are, will affect parties downstream.

And I say this because copyright owners and the ones that are suing tend to be deep pocketed, and they will continue fighting you in the courts until they get the answer that they want. So, so all of this is to say, be careful. Probably get a license for your training data. The biggest issue, well, one of the big issues that's sort of playing out within IP scholarship at the moment is whether or not training is fair use.

There is a theory of non-expressive fair use, developed by Matt Sag, that people are disagreeing about whether or not it applies to generative AI. So, if you think about something like a search engine, right, like a search engine, in order to index web pages, has to make copies of them. Those copies are not authorized.

Does that mean that a search engine like Google has engaged in infringement? No, because when it makes those copies of the webpages, it does so in order to generate metadata about them, to index them, to produce effective search results. It doesn't make those copies in order to communicate the expressive content of those webpages to a new audience.

So similarly, in the context of generative AI, if a model is learning from its training data how to produce content that looks similar, people have argued that, well, you know, these copies that the model makes in order to learn, those copies never see the light of day, they're contained within the belly of the machine, so that's clearly non-expressive fair use.

But the thing is that, you know, we know, because there are plenty of examples of this, that models are capable of generating verbatim copies of training data, which means that the copies the model makes within the training process are being shown to a new audience, like you see them in the outputs oftentimes. So, whether or not this theory applies is, again, a matter of, like, deep contestation. We won't know for a long time whether or not it's fair use, but in the meantime, if you want to avoid liability, be careful.

Chinmayi Sharma: Wise words.

Catherine Sharkey: I want to, not surprisingly, argue that, you know, products liability is kind of a mature doctrine that has faced all sorts of difficult issues. So it is not the case. Sometimes I talk to people, and I understand, one of the great things about conferences like this and other things is bringing people, non-torts, non-products liability, non-administrative law people, into my world, and bringing me into a broader world with other scholars.

But I think sometimes people have this idea that products liability means you're a manufacturer, et cetera. You test, you design, you put something into the stream of commerce, and then you're good. And in fact, you know, all of the wealth of cases about failure to warn under products liability are all about how you have post-sale duties to warn, and under what situations, how there are continuing duties, so that when new risk evidence comes to light, you have to do something about that, et cetera.

There's all sorts of, I'll talk about, there's a case before the U.S. Supreme Court, it's kind of exciting, tort cases don't often get to the U.S. Supreme Court, but this one was under admiralty. But the basic idea was a manufacturer made a bare metal turbine, right? That's what they manufactured. And then down the road someone put in asbestos-laden gaskets, and the U.S. Supreme Court had to decide, does the original manufacturer of the bare metal have some duty to warn on the basis of the dangerous thing that got inserted down the road by someone else.

And to my, of course, to my elation, both the majority and the dissent in that case applied like a cheapest cost avoider framework. You might have heard about Guido Calabresi's view that in tort law we try to impose liability on the party that could most readily have averted or mitigated the harm.

So the majority and the dissent both do that. They come to differing conclusions. So on the one hand, you think, oh great, just what academics love. Come up with some theoretical framework that in practice could come out either way. But to me, that's the power of it, right?

Because in different types of situations with different things happening all along the chain, you can't always say that the cheapest cost avoider was the person fine tuning here, or the cheapest cost avoider was up there. But in a fact specific dispute, you can ask a court, and if that's your framework, what the majority in that case decided is there was a duty to warn.

That actually that bare metal manufacturer was the entity that had the best information about how this was being used down the road and how they could have taken measures to avert that kind of harm. So to me, that's very powerful and it suggests that in this world with all of this complication, we kind of have a framework that's already had to deal with that.

The one other thing sometimes people say is, oh, we've never had to deal with something that learns out in the world. Of course we have. Again, think about drugs, right? We require a ton of information. Three phases of clinical trials, the FDA signing off, but what happens once a drug goes out into the world? Well, it interacts with everybody's different body, et cetera, in a dramatically new way where we get new information about harms that were or weren't anticipated.

So it's, and we have a framework that says when that new risk evidence comes to light, guess what? You can sue in products liability unless you’ve gone back to the FDA with that information. So that's a wonderful feedback loop. It's not that it couldn't happen under just a pure negligence world. In fact, again, the light between us is quite small because these are courts applying a negligence inflected test.

I have no problem with negligence as a standard of liability, it's just we have this well-developed model. And I think it's a model, not that everything is going to be easy, and we know ahead of time, this entity will always be liable. I mean, you know, we have, regardless of what we might say about the U.S. Supreme Court, right, they were on both sides of this using the same conceptual framework. And they just saw the application of the facts to that a little bit differently, but to me that sounds like a very good model to test out and see in different kinds of scenarios.

Bryan Choi: I'll just say that I think the supply chain question is a really interesting thing. It's very fertile ground for further study, both in the software context and in the AI context. And I think the IP sort of case study or example, right, it really brings to light how the supply chain really can be problematic, right?

Because I think there you really see the AI developers not controlling the entire supply chain. So in other contexts, like in cars, you have the entity trying to control all the things, right? All the sort of case studies, the examples, you know, how many miles you're driving, the corner, you know, pedestrian examples. All these things are generated by the company. They're sort of all in house.

And so you could say, well, it's easy, right? There is no supply chain problem because it's all happening under one entity. But the concern that I would have and that's kind of foreshadowed by the IP example, is if you start to bring liability and the companies start to get anxious about that, then how do they respond defensively, do they start to farm that out? Do they outsource it?

You know, actually, we're no longer in the business of manufacturing data, we're going to ask another entity to handle that. And so, you know, you see the same kind of thing happen with car manufacturing, right? You have different manufacturers that make tires, that make other kinds of components. And you sort of farm all that out and outsource it.

And I will say, you know, one area where products liability does have an advantage is in this component supply chain problem. And, so, you know, I'm totally comfortable with that. I too do not have a problem with product liability in that sense. So, I would be supportive of having parallel actions, especially when there are certain problems that one theory can solve better than another theory can.

Chinmayi Sharma: I could ask questions all day, but I'm going to open it up to Q&A.

Audience Member 1: Good morning, everyone. I'm Lovett. I'm actually an AI governance professional and also the founder of the Voice of Responsible AI group. It's a platform for minorities to talk about AI governance and how it affects us. So I think my question is for Bryan.

I know you talked a lot about product liability. Within the context of the United States, can we classify AI as a product? And if so, looking at the Rodgers v. Christie case in New York, the court actually ruled that an AI is not a product or a service. And for those who don't know about the case, it was a case between a lady called Christie whose son got killed, I think three days after someone was released from jail. And an AI algorithm was used to make the recommendation for the guy to be released from jail.

So she's claiming that, as a result of the release, her son got killed, and is suing the state of New York for, like, I'm not a lawyer though, responsibility for the son's death. And that case was actually, I think she lost, from what I read, and the argument was that AI services are not like products. So in the current context, where do we stand looking at a case like that when it comes to AI?

Bryan Choi: Yeah, yeah. That's a very, I would pass the baton to Cathy because I think your piece really talks about this.

Catherine Sharkey: Yeah, so it's a great question. It's a pressing, cutting-edge legal question, the extent to which you could consider these things products. So there are cases against social media platforms, and courts, actually in California, have gone in different directions. There's a state court case and there's a federal multidistrict litigation.

And so what I'll say, and I talk about this a little bit in my piece, is that a court that thinks of a product as a traditional, tangible, static thing is going to be more likely to say AI and social media platforms cannot be a product. That's what happened at the state level in the California case. At the federal level, the court went the other way, though it still kind of used this tangibility criterion.

It analogized the idea that you can make social media platforms with certain types of features, like parental controls, to the physical world, where you can buy prescription medication with a child safety lock, and in that sense it said the claims should go forward.

In my piece, what I argue is that the tangibility line served a kind of historic purpose, but I think we have to get over that. And I suggest two things. One, doctrinally, courts have started to look at the idea of mass production, mass production of something leading to high-scale societal risk, and to say that's why we decided to use products liability historically, so maybe that's why we should do it today.

And then I end, not surprisingly, with a kind of rallying cry for the cheapest cost avoider. And I point to a Florida case against Lyft, where the court held that the application could be treated as a product. It was a horrible case in which a Lyft driver was distracted, because they were required to respond to certain questions, and they hit someone. And why treat the app as a product? Well, there were arguments that this isn't a service.

Interestingly, if you look doctrinally, and I think Bryan would agree with me, the courts are all over the map about the difference: they'll say a service is not a product, and then you ask, okay, what's a service? I don't know. Sometimes it involves humans, discretion. But in any event, this court instead said, we're going to hold Lyft responsible because they're the ones who could have avoided this kind of harm through the way they were designing that app.

So to me, that's the right reason to say something's a product, but that's not traditional doctrinal analysis. And the courts, in my opinion, have to lose their hold on tangibility the same way they had to lose their hold on transfer of legal title. But it's a great question.

Chinmayi Sharma: Thank you.

Audience Member 2: Hello. I'm John Bergmayer with Public Knowledge. So, in the discussion of the different liability regimes, I keep thinking about how software per se almost has its own unique liability regime, since software licenses almost always have these ridiculously broad disclaimers of liability, not fit for any purpose, stuff that would clearly be ridiculously unenforceable if a power tool maker tried to say, oh, we're just not liable. And it's allowed due to the magic of, well, it's copyrighted software, it's a license.

Well, anyway, what sorts of moves would be necessary to prevent this from happening to AI? Regardless of what regime you pick as the best default, how do you prevent AI companies from finding ways to get out of it through software licenses or contracts and such?

Bryan Choi: I'll start, but I'm going to hand it right back to you, because the answer there is that tort has not been applied in many software cases because of something called the economic loss doctrine, which is something that Cathy is an expert on.

And, you know, the contract disclaimer applies to warranty law and contract law, right? If you want to disclaim any warranties or any fitness for purpose, then you need that language; you have to disclose that to the purchaser of the product, and then you can disclaim it. Warranty and sales law allows you to do that. That's why those contracts are there.

Now, if tort law applied to software, those disclaimers wouldn't have any effect, right? They only apply to this other area of law. And the only reason they have been so effective is because the courts have said, well, we're not going to apply tort law to software in most cases unless there's physical harm or property damage.

And of course, harm to the computer itself has been ruled out; that's not the kind of property damage that we mean. But I'm going to hand it back over to Cathy, because you've written about this.

Catherine Sharkey: No, I agree with that. And I would only add, first of all, that sometimes, as you mentioned, there will be these disclaimers and you wonder what the disclaimer is doing, because you look at it and you know it couldn't stand up in court.

So, I don't know if you've ever done this, I'm idiosyncratic, I realize, but I read disclaimers all the time in things like, for example, a field trip that a child might have to take with school. And they disclaim liability not only for negligence, but for gross negligence, for recklessness, for kidnapping. I mean, you read this and you think, you know, who is the legal team behind this?

And you know, I've had discussions about the same thing with skiing. Not only do you assume the risks of regular negligence, but you cannot sue if someone intentionally aims a snowblower at you and guns you down. These are absurd. They wouldn't hold up in court.

The road not taken, and again, I'm going to leave this for Bryan and me to continue to battle out offline, is contract and negligence. The law could have gone down that road, very sharply saying that under contract law you would not be able to disclaim these kinds of tort liabilities.

Instead of going down that route, we switched everything over, at a point in time, into products liability and handled it there. But I'm not worried: if you can articulate a design defect that's leading to harm, yes, it's easier for physical injury and property damage, but it doesn't stop there. It can be privacy kinds of harms. It can be reputational harms.

And then even under the economic loss rule Bryan mentioned, when you have a product that just destroys itself, that's purely economic loss. But if it's a product that injures another part of the product, there's an interesting question here: what if you're going to articulate AI as a product, and then something happens that hurts the other product?

Then you could get liability there, tort liability as well. So it kind of comes back to your first question.

Chinmayi Sharma: Moderator's privilege to bring Katrina into the conversation. I feel like we don't talk about this a lot in the software context, but on the idea of software as a copyrightable thing, with licenses as a core part of its distribution, do you mind talking a little bit about whether there is a unique aspect to how these licenses work in the software world, and why that might have led to this weird world where these disclaimers, maybe not legally enforceable, in practice end up with people just not bringing cases against software manufacturers?

Katrina Geddes: Yeah, so software is definitely a strange one. A lot of people, well, some people, don't think it's creative and deserving of copyright protection; it's obviously highly functional and utilitarian. Software programmers, of course, argue that it's deeply creative and deserves protection.

There is currently ongoing litigation against Microsoft and GitHub because open-source software from GitHub was scraped and used to train their code-completion model, Copilot. So that's ongoing litigation that's relevant for the generative AI space.

Within copyright specifically, there is this long-standing tension between copyright and contract law. I can't speak to disclaimers about what software is intended to be used for, but there is this interesting tension between software distributors, who want to impose restrictions on how users use their products, and what copyright says users can do.

Courts have generally recognized practices like reverse engineering software as fair use. But software distributors are really mad about that, so they include terms within their software licenses that say users cannot engage in reverse engineering. So there's a battle happening between what software manufacturers are trying to prevent users from doing with their products and what copyright on the books says they actually can do.

And then, you know, that leads to broader questions about what is the best tool to regulate markets and creative expression. Is it contract or is it copyright? Is the copyright statute an onerous, one-size-fits-all, standard-form contract? Should we allow parties like software distributors to exploit freedom of contract to contract around statutory limits on their exclusive rights? This is an ongoing question, and one that I cannot resolve today.

Audience Member 3: Hi, thank you. My name's Dean Ball. I'm a research fellow on AI policy at the Mercatus Center. So, it seems to me that with any liability regime, and I've tried very hard myself to figure out how these various regimes would apply in practice,

there's substantial uncertainty as to how it would actually all work. And at the end of the day, we're kind of leaving ourselves exposed to whatever the fact pattern is of the initial cases, and what the judges decide, and what the general mood toward AI is at the time those cases are decided.

And I just wonder how you all balance this. There's the intellectual satisfaction of getting this right. And then there is the question of whether this is the most important discovery in the history of the human species, one that will define geopolitical competition for the next hundred years, that will potentially save millions of lives and, you know, many other things. What is the risk that a liability regime messes that trajectory up? I'm just curious if you balance that at all.

Catherine Sharkey: Yes. And again, we can get into disputes about this, but when I talked about these phases of products liability with transformative technologies, I don't think it's that different. The automobile, which was this thing of danger leading to carnage, was also seen at the time as revolutionizing everything about society.

Mass production of goods, same thing, was revolutionizing. In the digital age, namely the online platform economy, how consumers were receiving things was revolutionary. So I think that's appropriately balanced in a risk-utility type of test. And I also think it feeds into my earlier point about the dangers of just saying regulate now, right? We're going to hear about this later, and you could get that horrifically wrong. But I also don't want a regulatory void, just saying there's immunity because it's not a product. So, absolutely.

Quick footnote, because I just want to go back to the software question with an interesting data point. The FDA regulates software as a medical device. They have guidance, et cetera. That to me is a great example: if you want, for underlying reasons, to regulate something because of the medical risks, you just do so. Now, it's not that easy. We have to get into what's a product, what's a service, et cetera.

But we have some examples that are a little more clear cut. Look just to the FDA. Why are they regulating software as a medical product? And maybe we could get some of that learning over into the tort space.

Bryan Choi: Can I just say, the accelerationist tendency is to focus on long-range risk, and that might be appropriate from a regulatory standpoint, which we'll get to in the afternoon panel. But one of the advantages of the litigation approach, the liability approach, is that it focuses on actual parties who are harmed in the moment, and it says their interests matter too.

And so there's a kind of philosophical question, a moral question, of whether we should just say the injuries to those parties don't matter because of the long-term, speculative benefits, or whether we should actually allow those claims to be litigated. And I think that's one of the values of bringing the courts into the process.

Audience Member 4: Hi, I'm Chance Goddard, a 2L here and doctoral candidate in information technology. My question is, to what extent can privacy laws impact this products liability analysis for things like failure to warn claims for automated decision making as we've seen earlier with AI?

Catherine Sharkey: It's a great question. In the places where I've looked at this, like the data breach area, it's back to Chinny's first question of what's the harm and how it could be articulated. Because of standing issues, there's been a move where people say, oh, there's something I don't like about how my data has been released,

but I'm not sure how to articulate it. Is it a property harm? Is it privacy? And the U.S. Supreme Court has said this in some of the standing cases: privacy is this age-old tort, and you have to be able to show real privacy harm. So, and this is my own very fledgling view,

it gets back to Chinny's original question about articulating privacy harms. The classic Prosser privacy torts actually, in my view, protect different interests. And there's one, for example, the right of publicity, which would start to interface with some of the things that Katrina was talking about.

Then there's the continental right to be left alone that people still like to talk about, but I don't see a lot of courts taking that on. But it's a great question, and I think a lot more work should be done to update those privacy torts for this new age of digital harms.

Audience Member 5: I sometimes teach tort law to people, and I tell them it's about misfortune. And I really enjoyed reading the papers. But I suppose my question is whether you're really sticking with a one-size-fits-all approach to liability, the notion that it's all products, or that we can apply a negligence liability rule to all of it.

My basis for that is, well, it's all autobiographical, isn't it? I've got a relative, younger than almost everybody in this room, who writes AI programs, even as an intern. And those are programs that are largely used internally by the company he's interning for, to get them better information about stuff they already think they know something about.

Now, to shift to product liability toward those it affects, just because it's an AI system they're now using, is actually a big leap from the current position. There might be some situations where the leap is far less, where applying product liability is pretty close to the liabilities they'd already have for, say, a medical device.

But it strikes me that there might be some AI systems where the leap is much greater. And so I guess I just wanted to know, are you really sticking with one size fits all? It's all product, or it's all negligence.

Bryan Choi: So are you concerned that your relative is going to be held liable for the work that they're doing?

Catherine Sharkey: Bryan is available for legal advice. I am not.

Audience Member 5: If you want a real question: if you're going to say, hey, you need to be insured to be doing this, you do affect the little startups and the like, how many resources they have to have if they're going to start putting stuff out. If you have to have the sort of insurance you'd need to stand behind a products liability suit, that's a different sort of proposition for some of the little startups that relative might have been involved in.

So it's not a direct concern about liability, but it is a direct concern about what the consequences are, about what amount of kit you have to have to participate in this space.

Bryan Choi: Yeah. Thanks for the extra clarification. I'd say, do we compare this to something like nuclear engineering, or bridge building, or, you know, keeping wild animals? What is the nature of the AI risk and the activity?

Is it something where you say, yeah, we want to have anyone, whether they're trained or not, just mucking around in this, and that's great for innovation? Or do we think we should be more mindful about how we build these systems?

And that applies, I think, to software as well. I don't want to answer that question, and I think reasonable minds can differ on it. But that is sort of the core intuition: should we be allowing anyone to have their hands on this, or should there be some restraint, whether that's imposed by markets or by law or by liability risks, insurance, and so forth? How you construct those, I think, is one of the questions.

Catherine Sharkey: Very succinctly, let me break this into two questions. The first one: I guess I don't see this as one size fits all. Later in the afternoon, we're going to hear from Eugene Volokh about the First Amendment and free expression. So even though I'm a fan, I'm team products liability,

not everything's a product. Some things are going to be information, expression. So it's not like I'm going out and saying everything under the sun is a product from now on.

But I don't think it's such a leap; I actually think courts are gravitating in this direction, not just the FDA. I agree with you, it's maybe a smaller step for them to deem software a medical product because they've got AI-enabled medical devices. But I mentioned the social media cases finding design defect there.

But on the broader question, which Bryan's already answered as well, it's sort of a philosophical decision about optimal regulation, whether we're thinking about tort liability or regulators. I have a really strong view that the timing matters, and that we should actually have liability at the outset. The worst-case scenario, and there are unfortunately some examples of this, is to wait and do nothing and then start to regulate.

Think about, for example, direct-to-consumer genetic testing. It was like the Wild West. No one was regulating or doing anything, and then the FDA decided to treat it as a medical device. And 23andMe, which of course is no longer the big conglomerate everyone worried about a few years ago, it's now in some financial trouble, but nonetheless, what happened was that the regulation came in the middle and ensconced a huge incumbent. 23andMe was then the only company that had FDA approval for what it was doing. So I actually worry more about waiting to regulate.

One other quick example: online platforms, right? Amazon was fighting tooth and nail against being held liable as a seller. And as soon as the courts started going in that direction, Amazon got behind legislation that would hold everyone liable, all the small startups of the world. Because think about it: they are a huge incumbent that could satisfy the reserves, get insurance, et cetera.

So to me that all argues for making sure we regulate early enough. And yes, if you're a small startup but you might be posing wide-scale risks to society, you should be thinking about liability and insurance. I think that's a positive thing. But I worry more about doing nothing and then starting to regulate, and we have antitrust law and other things we'll actually talk about today too, because the liability regime at that point really favors incumbents.

Chinmayi Sharma: So with that, I want to thank all of our panelists and really urge you guys to read the primers. They're excellent. And then read the body of work that these guys have developed over their careers. Thank you so much for your questions.

Alan Z. Rozenshtein: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineers for this episode were the good people at the Georgetown Institute for Law and Technology. Our theme song is from Alibi Music. As always, thank you for listening.


Chinmayi Sharma is an Associate Professor at Fordham Law School. Her research and teaching focus on internet governance, platform accountability, cybersecurity, and computer crime/criminal procedure. Before joining academia, Chinmayi worked at Harris, Wiltshire & Grannis LLP, a telecommunications law firm in Washington, D.C., clerked for Chief Judge Michael F. Urbanski of the Western District of Virginia, and co-founded a software development company.
Catherine M. Sharkey is the Segal Family Professor of Regulatory Law and Policy at New York University Law School.
Bryan H. Choi is an Associate Professor of Law and Computer Science & Engineering at the Ohio State University. His scholarship focuses on software safety, the challenges to constructing a workable software liability regime, and data privacy.
Katrina Geddes is a Postdoctoral Fellow at the Information Law Institute at NYU School of Law and the Digital Life Initiative at Cornell Tech. Her research focuses on technology law, intellectual property, and information capitalism.
Jen Patja is the editor of the Lawfare Podcast and Rational Security, and serves as Lawfare’s Director of Audience Engagement. Previously, she was Co-Executive Director of Virginia Civics and Deputy Director of the Center for the Constitution at James Madison's Montpelier, where she worked to deepen public understanding of constitutional democracy and inspire meaningful civic participation.
