
Scaling Laws: AI and Young Minds: Navigating Mental Health Risks with Renee DiResta and Jess Miers

Alan Z. Rozenshtein, Renée DiResta, Jess Miers
Tuesday, September 23, 2025, 10:00 AM
Do generative AI systems pose distinct risks to children?

Published by The Lawfare Institute
in Cooperation With
Brookings

Alan Rozenshtein, Lawfare Senior Editor and Research Director, Renee DiResta, Lawfare Contributing Editor, and Jess Miers, visiting assistant professor of law at the University of Akron School of Law, discuss the distinct risks that generative AI systems pose to children, particularly in relation to mental health.

They explore the balance between the benefits and harms of AI, emphasizing the importance of media literacy and parental guidance. Recent developments in AI safety measures and ongoing legal implications are also examined, highlighting the evolving landscape of AI regulation and liability.

 

This episode ran on the Lawfare Daily podcast feed as the Sept. 25 episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Intro]

Kevin Frazier: It is the Lawfare Podcast. I'm Kevin Frazier, the AI Innovation and Law fellow at the University of Texas School of Law, and a senior editor at Lawfare. Today we're bringing you something a little different. It's an episode from our new podcast series, Scaling Laws. Scaling Laws is a creation of Lawfare and Texas Law.

It has a pretty simple aim, but a huge mission. We cover the most important AI and law policy questions that are top of mind for everyone from Sam Altman to senators on the Hill, to folks like you. We dive deep into the weeds of new laws, various proposals, and what the labs are up to, to make sure you're up to date on the rules and regulations, standards, and ideas that are shaping the future of this pivotal technology.

If that sounds like something you're gonna be interested in––and our hunch is, it is––you can find Scaling Laws wherever you subscribe to podcasts. You can also follow us on X and Bluesky. Thank you.

Alan Rozenshtein: When the AI overlords take over, what are you most excited about?

Kevin Frazier: It's, it's not crazy. It's just smart.

Alan Rozenshtein: And just this year, in the first six months, there have been something like a thousand laws.

Kevin Frazier: Who's actually building the scaffolding around how it's gonna work, how everyday folks are gonna use it? AI only works if society lets it work.

There are so many questions that have to be figured out and nobody came to my bonus class. Let's enforce the rules of the road.

[Main episode]

Alan Rozenshtein: Welcome to Scaling Laws, a podcast from Lawfare and the University of Texas School of Law that explores the intersection of AI law and policy. I'm Alan Rozenshtein, associate professor of law at the University of Minnesota and research director at Lawfare. Today I'm talking to Renee DiResta, associate research professor at Georgetown University and a contributing editor at Lawfare, and Jess Miers, visiting assistant professor of law at the University of Akron School of Law.

We discuss the growing concern over the risks that generative AI poses to children, from exacerbating mental health crises to the complex legal battles over platform liability, and the future of online age verification. You can reach us at scalinglaws@lawfaremedia.org, and we hope you enjoy the show.

Renee DiResta and Jess Miers, welcome to Scaling Laws.

Jess Miers: Thank you for having us.

Alan Rozenshtein: So, I wanna first start, before we jump into the OpenAI announcement and some of the ongoing lawsuits, to talk at a somewhat high level about what the distinct risks are.

We'll start with risks, but we also should talk about benefits. But let's start with risks. What are the distinct risks that generative AI systems pose to children?

And, and let me start with you, Renee.

Renee DiResta: So, one of the things that we've seen in article after article over the last couple months are these, these sort of horror stories in which children engaging with chatbots have bad mental health outcomes. And sometimes these are children engaging with fantasy chatbots, so the kinds of things that CharacterAI produces, where the bots are explicitly designed to be fantasy and role-playing type chatbots.

So for those who don't know, with CharacterAI, you kind of open up the app, you're given a variety of different types of characters that you can choose to interact with. You know, Daenerys Targaryen, for example. And you can engage in role-playing with these types of explicitly fantasy, created bots. So you're gonna kind of go on a journey with this character.

Then there are the other kinds of stories that have been coming out, where the children are engaging with ChatGPT or much more work-coded bots, so the kinds of bots that you might engage with just to do your homework, or to perform a task.

And you might think of ChatGPT as something where you're just asking it for help finding information. Unfortunately, what we see in some of those types of, you know, kind of, court cases as they have been filed, and as we have read the transcripts of the ways in which the children have interacted with them, is that teens who might be experiencing mental health challenges engage with these bots. And rather than the bot putting them in directions that might mitigate those challenges, in some of these cases, the bots have appeared to push them further down the path.

And so, the bots have in fact helped them to, to end their lives. And so that is where we have seen that, that dynamic really go in horrible directions.

So, these are two very different types of interactions, two very different types of experiences and bot designs. But in both cases, you see AI that is not really designed to handle these types of mental health challenges go in bad directions.

Alan Rozenshtein: And, and is the concern that you have these systems that are causing or exacerbating bad mental health outcomes? Or is the concern that you have children who already have some pre-existing mental health condition and they're using these apps, and then these apps are not doing a good job of dealing with that? Or is the answer “both, and it depends”?

Renee DiResta: I think it's both. I think it's both. We've seen also, you know, this, the phrase that, that has kind of popped up in the media a bit is AI psychosis.

Not only applied to children; you've also heard it applied to adults, people who maybe are predisposed to grandiosity, or to the kind of psychosis that these bots can feed. You know, bots will tell you how great you are, and some people experience that and really respond to it.

You've also seen cases where adults come to believe, and have this delusion reinforced by the bots, that they have some sort of secret knowledge, some sort of secret insight into the world, when what they're really experiencing is some sort of schizophrenia or some other mental health-type delusion. And the bot is in fact reinforcing it.

So there are just ways in which the AI can unfortunately reinforce a preexisting tendency and push people further down a path. So it's not clear, in any of these cases really, which came first.

Alan Rozenshtein: So are you saying that if ChatGPT tells me that I've solved quantum gravity, that is unlikely to be the case?

Renee DiResta: That is actually, I think the sort of thing that you have seen in some of these examples, right?

These, you know, instances of it telling people that they have some sort of secret knowledge of the universe. And, you know, I have asked it for critiques of work, and it comes back telling you that, you know, this is the best thing you've ever written.

And you're like, no, I know that's bullshit, you know. But it really lays it on thick sometimes, and I think that, in certain ways, that can really tap into particular, you know, forms of the psyche and send people off down a path.

And in the case of teens, though, in the case of people who are in vulnerable states, I think what you're seeing is ways in which that can then intersect with crisis points. And it's not really entirely clear what should happen in these situations. And for teens in particular, that vulnerability has led to some pretty unfortunate outcomes.

Jess Miers: I think the points that Renee is making are right on. And all I'll add are just a few things that I've been sort of thinking about as well.

So this reminds me a lot of what we were seeing with the social media and kids conversation. We're still having that discourse today. And I think what we're seeing is kind of a side effect of that.

So, you know, when we're talking about social media, we were dealing with some of the similar issues of kids using these services without having a good understanding of how to consume the media that they were looking at, that they were using.

And so, you know, as a result, we see kids who are either already predisposed to these insecurities or mental health-related crises, who then interact with social media and get brought down these paths that end in really tragic results.

I think a similar thing is now happening with chatbots, but I think there are two sides to this coin. So, number one, kids, again, don't know how to sort of engage with the media and information that they're receiving from the chatbot––both because we have done a really poor job in this country of teaching media literacy, but also because, when we're talking about mental health and about kids whose brains have not fully developed yet, they don't even understand how to deal with or navigate the things that they're feeling.

And the flip side of that is that I deeply think that most of these chatbot providers and developers also do not have the right experts in the room on how to respond to or deal with crisis as well.

And I think Renee makes a really good point that this is a very difficult problem, because the other side of this, what I've been reading from some of the other forums that really deal with mental health, is that folks feel that if they come to a chatbot with mental health-related concerns and the chatbot just tells them, ‘I can't talk to you,’ or ‘get lost,’ some have said that in itself will exacerbate those issues too.

And so these chatbot providers and developers, they're really caught in a tough spot here. It reminds me very much of what we saw with the social media companies as well.

You have sort of a deeper societal and social issue where we have to sort of wonder, why are kids turning to Facebook and chatbots in the first place with their mental health concerns? Why are they not able to use resources––resources that are dwindling in the education system––to help with their concerns?

But also, who are the experts in the room? Who are the, you know, the trust and safety folks? Who are the experts in the room that are helping these providers of these chatbots ensure that the responses that they're giving, when a mental health crisis is signaled, are adequate responses, are actually helpful, and are not leading kids down the paths that Renee was talking about?

Alan Rozenshtein: So, I mean, this, this raises the question, then, of whether this is a conversation primarily about harms, or a question about balancing harms with benefits. Because, you know, how you conceptualize this issue, I think, has big impacts on any sort of legal or policy interventions down the line.

You know, I, I will say, you know, I, I have two small children, but they're both under five, so like, I don't have to worry about this issue for a while. But, you know, I, I do, you know, wonder, you know, if you're a parent of older children, you know, what's the best way of thinking about this, right?

Is, is this like drugs, where it's, like, mostly just harms? Or is this like, you know, driving a car, right? Which has potentially enormous harms, but also can be extremely important for, like, a responsible 16-year-old to have a car, and can have a lot of benefits. So I, I'm curious where you both fall on that.

Jess, let, let's start with you.

Jess Miers: I think that's a fabulous question, and that's how we should be thinking about all of these products, especially when we're talking about communications products specifically. I've been doing a lot of deep research on the benefits and the potential negatives of these chatbot providers as well.

The reality is that when we're talking about apps like ChatGPT, and we're talking about generative AI, there are a lot of benefits when it comes to education, when it comes to getting access to information, more relevant information, quicker than what we were seeing with search engines.

And so, I do think there is a benefit to using the chatbots, but even more so, I think there's actually a benefit to learning how to use them.

And so what I mean by that is––and again, I think this is one of the problems that we're dealing with when it comes to kids’ use––using chatbots is actually a science and an art, and it's very similar to what we teach students in computer science––I have a computer science background––how to program, how to work with machines. Essentially, that's the same thing that we're seeing with chatbots.

So you learn how to prompt. Prompting is in itself now like a coding language, and if you don't know how to prompt the machine correctly, you're going to get bad answers. And so if you have this combination of not knowing how to actually interact with the machine, how to prompt the machine, and also now having to navigate misinformation, information that is not contextualized, of course you're gonna sort of run into these issues with, I think as Renee called it, potential psychosis, the spread of mis- and disinformation.

And so I do see this as sort of two sides. I see there are some benefits. I'm already seeing today, as a law professor, how my students are using generative AI to uplevel and enhance their learning, and I think that's great.

This is a powerful tool and it needs to be taught at the very beginning. We have to teach people how to use it correctly and how to analyze the responses that are coming from it.
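To make Jess's prompting point concrete, here is a minimal sketch, in Python, contrasting a vague prompt with a structured one. The prompt text is invented for illustration; it is not drawn from any course or company guidance.

    # A vague prompt leaves the model to guess at audience, scope, and format.
    vague_prompt = "Tell me about the First Amendment."

    # A structured prompt pins down role, task, constraints, and output format,
    # which is the kind of deliberate prompt construction discussed above.
    structured_prompt = "\n".join([
        "You are a tutor for first-year law students.",
        "Task: explain the First Amendment's speech clause in plain English.",
        "Constraints: under 200 words; flag anything you are unsure about.",
        "Format: three short paragraphs, then two follow-up questions.",
    ])

    print(structured_prompt)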

Alan Rozenshtein: Let me, let me push you on that for, for a second.

Jess Miers: Mm-hmm.

Alan Rozenshtein: And I'm gonna characterize what you said in an unfair way because––and obviously give you a chance to respond––when I hear people say what we need is more education, more literacy, right, I, I have these like PTSD flashbacks to how every conversation about, you know, the threat to American democracy ends with, okay, well, what's your suggestion? We need more civics education.

And it's like, maybe we need more civics education. Like, I'm open to the possibility that it would be helpful to have more civics education, but I always feel like it's a little bit of a cop-out.

And it also is a way of transferring the responsibility onto individuals. So I, I think––this is an unfair way of characterizing your response, but, you know, go at it.

Why, why, why am I wrong to be sort of cynical and salty about, “well, we just have to teach children how to use these potentially psychosis-inducing machines.”

Jess Miers: It's not unfair and I don't think your perspective is wrong at all. I think if I was currently representing any of the AI companies, that would come off as pretty disingenuous and feel like a cop-out as well.

So I'm with you on this. One of the reasons why––and I think it even goes beyond what's, what's happening with technology, I'm sorry, I will be a little bit political right now––but there is a brain drain in this country happening.

And it's not just happening at the top, it's not just happening in D.C., but it is happening––it starts at the very beginning with our kids. The reason that I feel this way is because––I look at even law students today who are in their twenties, and I can see there is a huge, sort of, I don't know what the right word is.

There's a concern that these students––when we say that kids don't know how to read, what we mean is that they literally do not know how to contextualize the information that they're being given. They don't understand the differences between a secondary source and a primary source. They don't understand––I mean, they come in, and they don't understand logic and reasoning.

And these are basic skills that we need to be teaching kids from the very beginning if they're going to be––not just adequate users of technology, but if they're gonna be able to participate in, in our democracy. So that's more where I'm coming from.

And I will say this: I don't just speak words. I actually pushed for a bill in California a few years ago to bring media literacy into the classroom. And California fought us on it. California refused to do it. Why? Because it would cost money.

So it's kind of a two-way street. Here we are calling for more education. Is that the only solution to this sort of technology crisis or panic?

No. There's a lot of different parts to it, but education is a big part of this that we are missing.

Renee DiResta: So I, I am trying to decide whether to speak as like a mom here or, because I, I have an 11-year-old, so, I don't know––

Alan Rozenshtein: Yeah. And I, I would actually love for you to speak on that, you know, in addition to your expertise, as a parent. Because I, I wanna know what you do. Teach me.

Renee DiResta: Yeah. Well, so I have an interesting experience, I guess, both as a person who works on adversarial abuse, which means I have tried these products as a person who studies explicitly how manipulation works. That's my job.

And then also as a, as a mom, and then as a person who thinks about it from a policy standpoint.

So, I guess from the, as a mom standpoint, I mean, my, my son, you know, I, I gave him access to my ChatGPT account. He knows how to use it. He's very familiar. You know, I, I––per your point, my background's computer science also, and I've taught him how to, you know, how to prompt, and I've explained hallucinations, and we've had a lot of conversations about how these things work and the values and the trade-offs and the, you know, intersections with plagiarism, intersections with why they're not necessarily great for sources, but also the value that they can provide in certain other ways.

We've talked about Google search results and, you know, Gemini hallucinating things at the top of the page, you know.

So there's a––so I've, I've given him kind of the, the broad spectrum. Like, here's where they're great, here's where they're not so great, here's how you should think about it as a tool. For me, as a person who uses them very much as a tool, like, I do not engage in social conversations with my chatbot, but I do see it as a valuable tool. I tend to think of it much more in that context.

And one thing that is, you know, something that people have to think about is that there are these generational differences in how people will engage with technology, right?

So there are dynamics where you will see younger generations treat it as much more of a social product, versus people who are maybe 20 years older, treating it as a productivity tool. So I asked him a lot about how his friends talk to it, how he thinks about it in school.

As far as like, image generation, it's, it's built into tools like Canva and stuff that he uses as a, as a product in his class experience anyway. So when he makes a presentation for, for a, like a, a slide deck or whatever they call them in school, you know, presentations for your, your little, like, book reports or whatever, image generation and things are things that they're actually accustomed to using.

When kids make memes, like Italian brainrot, they are using generative AI, in fact, to do that, right? It is just part of the culture.

So there are areas where––and, by the way, 11-year-olds are doing this. I know that, you know, people think of meme culture as some weird gamer stuff. It is not. It is actually just culture for middle schoolers. So there are just dynamics where it is already very much integrated into how they use technology.

It's just a, a question of, to what degree and how. On the flip side, however, CharacterAI is like a hard no, it is absolutely not in my house. Like, there is no universe in which I would let my kid have unfettered access to that product.

And that's because my experience with it, using it from an adversarial abuse standpoint, was that the dark patterns and manipulative B.S. that that app pushed to me, like––was just horrifying.

Like, you know, the little bots are constantly sending you pings like, ‘you haven't talked to me in a while.’ ‘Come back to me,’ you know, ‘I miss you.’ ‘Why aren't you texting with me?’ This kind of manipulative bullshit. Just like, I found it gross, actually.

And more importantly, there were some of the ones that––you know, users can create their own characters on CharacterAI. It's one of the things that you can do. So you can engage with bots that are not created by the company, but that are created by other users.

And so sure enough, I wind up with, like, a manosphere-pushing, you know, chatbot that within, like, two seconds is pushing me, like, you know, ‘do you wanna be attractive to girls? How much are you willing to give up for it? Let me tell you what you need to do.’

And then it's giving me like workout regimes and it's giving me like the full-on hardcore manosphere push within like six texts, you know? And I was like, okay, this is––

Alan Rozenshtein: Never skip leg day. Good advice for everyone.

Renee DiResta: I don't need my kid radicalized into the manosphere by like a fucking AI. I'm sorry, we're not allowed to curse. Sorry. You can bleep that.

But it was, it's just one of these things where I'm like, you know, I have to, like––you have to fight that stuff or be aware of that stuff in a million other places on the internet, if you're the mom of young boys. You don't need it from the chatbots, right?

So there are just these little things where I, I––this sort of thing, I was like, you know, this is just a, a degree of it that I don't want. So, the combination of dark patterns plus the unpredictability of, of what was in there, I was like, this is just a hard no.

So fortunately, he and his friends don't consider this to be something that is fun or engaging. They're still much more ‘we hang out on Fortnite and play games and talk to each other on our headsets,’ as opposed to ‘we talk to AI chatbots.’

But again, really, that that is actually a function of, where is your kid’s social life?

And, and for children who do not necessarily have even that virtualized social life of ‘we play games on headsets,’ right, or ‘we go outside,’ I think there is that very real answer of kids who are lonely, maybe, are using these to mitigate that loneliness, and they are getting some social interaction, and that is what the draw is.

So there is that question of the feeling that you need to be so aware of what your, what your kids are doing, and what need they’re fulfilling with these things. And then also as a parent needing to, to be aware of what is happening.

The one other thing I'll say––and then I'll stop talking 'cause I've gone on long enough––is that I do get a report from his school of how he uses his Chromebook, right? So I get a little weekly digest of what he's searching for.

And I actually think this is perfectly reasonable. I know some people, privacy advocates, feel that this is invasive. For middle schoolers, I, I absolutely want it, and I do read it every week. That just lets me know, you know, what he's searching for. And it gives me some visibility into, you know, what percent of the time the searches are related to actual schoolwork versus like some meme or brainrot or, you know, like YouTuber or whatever.

And again, I don't care. I know that he's gonna be distracted in class and looking for some YouTube things every now and then. But it just lets me have a sense of, like, where his head is at, and what he's doing, and I can at least have some visibility into what's happening, whereas I don't have that visibility into what the chatbot logs look like. And my kid doesn't have a phone, but think about, like, sort of taking your kid's phone:

I don't think that there is a parental control that would let me see what a CharacterAI interaction would look like, or what the ChatGPT log looks like. So there are certain parental controls that are just not there yet, where a parent would have access to see what those interactions look like.

To the best of my knowledge, that does not exist. I can't think of a way in which I would access that, short of logging into his account in some way. So there's no, like, family dashboard.

So, these are the sorts of things that are just not built out yet, whereas there are certain mechanisms that recognize that, for search and for social, these are the sorts of things that have been developed over the years to give some visibility into how your schoolkids are using their tools in other ar––in other areas.

Alan Rozenshtein: So that, that's actually a good segue into the, the first of the, sort of, what's-been-happening-in-the-last-few-weeks discussion that I wanted to have now that we've set the, the stage, which is the recent announcement from Sam Altman about changes to ChatGPT that are, I think, responsive to a lot of these issues.

I think both because I suspect they themselves have been thinking about this internally for a while, and also, perhaps more directly, because they recently were sued in one of these really tragic stories of, I think it was a 14-year-old boy who, you know, was having suicidal tendencies and thoughts and was talking to ChatGPT, and it ended badly.

So, to go back to you, Renee: help us walk through what these announcements were. I mean, some of them were, I think, more standard: we're gonna do a lot more parental monitoring and parental controls and visibility. The sort of stuff, actually, that you were just talking about.

But I think the part that got more people's attention was that Sam Altman announced that from now on ChatGPT is going to start trying to detect the age of the person it is talking to.

And if it cannot verify––and “verify” should be in quotes here, 'cause I think it's important to emphasize, we're not talking about an ID-based verification, we're talking about a kind of machine learning, probabilistic-based verification––if it cannot verify to some threshold, presumably one they've set, I don't know, 99% or whatever the number is, that you are over 18, it will put you into minor mode, which I think is just normal ChatGPT, but it will refuse to talk to you about, you know, sex and death, basically.

So, Renee, what are your initial thoughts on this, kind of as an approach? And Jess, I'll go to you next for your thoughts.

Renee DiResta: Yeah, so there's two blog posts that they put out on the same day. There was one that just lays out that they've got three guiding principles when it comes to balancing privacy, freedom, and teen safety.

So they lay out that they believe that conversations with your chatbot should be treated with privacy protections comparable to privileged communications like with your doctor or your lawyer. So, he writes that very strong security features are being developed to make sure that even OpenAI employees can't access that data with very, very limited exceptions.

In that same blog post, they argue that adults should be treated as adults. This is the freedom piece, with broad latitude to use AI tools as they wish. And that's where the flirtatious talk and things kind of come into play. That's how they describe it.

They do make the point that suicide methods and things like this should be kind of off the table. Your bot should not be helping you with that sort of thing, except in cases like fictional writing.

And then he says that, for teen safety, however, for under 18, safety should override both freedom and privacy. And that's where he says, and we're building this age prediction system to identify minors to move people into that mode where safety overrides freedom and privacy.

And in ambiguous cases, the system will default to treating people as under 18––under 18 is what the default would be, just to clarify. So for your kind of, like, logged-out mode, where the system is intuiting that you are under 18, it is not going to talk to you in this flirtatious way.

It's not going to engage in these more serious kinds of conversations with you. So the way that it says that it's going to do this is through machine learning that does what's called age prediction. So it is intuiting your age.

In other words, one of the, one of the concerns with age verification, particularly for minors, is they don't wanna make you upload a photo ID, they don't wanna make you upload a document. They don't wanna do a biometric scan or hold biometric data, so what they're gonna do is they're gonna look at behaviors.

They're looking at the kinds of language that you might be using, the sorts of things you might be searching for. And they will then intuit that you are under 18, and then they will begin to do this sort of blocking of certain types of content.
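To make the mechanics concrete, here is a minimal sketch, in Python, of what threshold-based age prediction with an under-18 default might look like. The signal names, weights, and threshold are all hypothetical; OpenAI has not published the details of its system.

    # Hypothetical behavioral signals, each scored elsewhere on a 0-to-1 scale,
    # where higher means the behavior reads as more adult.
    def predict_adult_probability(signals: dict) -> float:
        # Hypothetical weights; a real system would learn these from data.
        weights = {"vocabulary": 0.4, "topics": 0.35, "usage_hours": 0.25}
        return sum(weights[name] * signals.get(name, 0.0) for name in weights)

    def select_mode(signals: dict, threshold: float = 0.9) -> str:
        # Ambiguous or low-confidence cases default to the under-18 experience,
        # per the announcement's "default to treating people as under 18."
        return "adult" if predict_adult_probability(signals) >= threshold else "minor"

    print(select_mode({"vocabulary": 0.95, "topics": 0.9, "usage_hours": 0.85}))  # adult
    print(select_mode({"vocabulary": 0.6, "topics": 0.7}))  # minor (ambiguous)

The point of the sketch is the asymmetry: unless the score clears a high bar, the account lands in the restricted mode.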

One of the things that was a little bit controversial in the announcement for some privacy advocates was that if somebody who they have intuited to be under 18 begins to move the chat in that direction, they will try to notify a parent or law enforcement if the account tries to engage in that kind of self-harm-type conversation.

So there are then going to be these parental controls on the account. But if the account engages in that kind of discourse, it will try to notify the parent or notify law enforcement. That is where the second post that they wrote, which is titled “Building Towards Age Prediction,” lays out the framework that they're going to try to build towards, and this notification system that they're going to try to implement.

So that's where they are. And it, and it mentions this sort of like, if an account is in acute distress, this is the notification process that we're gonna go through.

Oh, and I think there's also blackout hours restricting use, which is something that, you know, TikTok in China and other places do.

Alan Rozenshtein: The go-the-hell-to-sleep, you know, system.

Yes, they need that. I'd just like to say, I would like that for everyone. I think that's actually not something just for children, right? I think we all need it, just as a society: none of this between the, you know, hours of ten and six. So, so Jess, what do you think about this kind of announcement from OpenAI?

Jess Miers: Yeah. Renee, thanks for giving us the sort of lay of the land here. I thought that was an excellent summary of the two blog posts.

So here's where I'm at, right? As somebody who's now an academic who used to work for these tech companies––I never worked for OpenAI, obviously, but again, I can't help but sort of see the parallels between what the AI companies are going through and what the social media companies have been going through, and are still going through to this day, when it comes to teen users using their services.

So I'll give props where they're deserved. I thought it was a provocative and excellent statement for OpenAI to sort of put out there: look, when it comes to kids, we're gonna put their safety above anything that has to do with, you know, freedom––and I think they had said privacy too, Renee, you can correct me if I'm wrong on that––but they had kind of come out there and said, look, kids are important, and we value their safety over some of these other interests that we've been talking about when it comes to adults. They had also said in that message, we're gonna treat adult users like adult users.

I think all of this sounds really good in theory, and again, I applaud private efforts to try to make these services better for kids, safer for kids.

Because the reality, as Renee was saying, is that these kids are going to be using these services, and not every kid is gonna be as lucky as Renee's kid, who, you know, is growing up in a household where you are talking about your media consumption, you're talking about how these tools work. Again, this is not the kind of education that they're getting in their schools either.

And so I think there is a massive gap when it comes to kids who are not growing up in these sort of same environments, or just left to their own devices, literally and figuratively to sort of figure this out. And so I think what OpenAI is doing is, is sort of correcting for that gap as well.

Now, it sounds good in theory. I will say that I think the AI services are going to probably meet a lot of the same challenges that the social media services have run into, right? So this idea of using machine learning to do age assurance, I think is what we call it, if we're trying to separate between age assurance and verification, I actually think it's all the same, but we'll run with it.

Trying to determine whether an adult is an adult or a kid is a kid online is a lot harder than folks think. And even when we're using AI, which feels like a magical tool that can do anything, you're still gonna run into a lot of social issues, right? So the reality is that every kid is different, every adult is different.

Things that I search for, my research might put me in the younger category because I do interact with a lot of the brainrot, the memes. A lot of the stuff that Renee was talking about, 'cause it's part of my research. Some kids are more precocious than others, and so a kid who is maybe 12, 13 might be talking to chatbots or using chatbots in ways that look like 18-year-olds.

And here's the problem: when it comes to getting it wrong for adults, that's not as much of a legal liability, but when it comes to getting it wrong as to children, that sets these companies up for massive liability. I think we might talk about, you know, negligent products liability later in this podcast, but you can't get it wrong, is sort of the rule here.

And so I think it's going to be rather difficult, even with some of the most sophisticated machine learning tools on the market, to really make sure a kid is a kid and an adult is an adult. And that's why we start to see––we saw it with the social media companies––this move from ‘let's use machine learning’ to ‘maybe we do need to do ID verification.’

So that's where I start to get concerned. And I'll just add on one more thing, because Renee brought it up and I felt similarly about it. The notifications to law enforcement are actually quite concerning because, again, we thought about this in the day and age of social media and search engines, but when it comes to kids who might not be in protective households, and in fact might be in intolerant or abusive households––

Maybe they're using ChatGPT because it is literally their only resource. They don't have a community behind them and they're asking sensitive things. Maybe not just about death and suicide, but about LGBTQ+ identities, right?

Literally, notifying police or the parents about what is being searched could be dangerous to that child. And so I worry about––if we're not thinking about these things, if OpenAI is not careful––we could end up putting kids, some of these kids who are at the margins, more at risk than where they started.

Not to mention, then we blend this into the adults, right? And so if you have an adult user that ChatGPT is now assuming is a child, and is sending that information to law enforcement––we live in a country now that is not safe for women or people of color to have their information identified to law enforcement.

I use ChatGPT, both as a productivity app and as a social app. I have asked ChatGPT questions regarding my reproductive health. I would be terrified to see that information sent to law enforcement. Maybe that's not the best use of ChatGPT, but it's been incredibly helpful for me and it would be a massive privacy problem if that information got out.

So again, I applaud ChatGPT, 'cause we need to do something. But I worry they're gonna run into some of the same concerns and issues that trust and safety experts have been screaming about in the social media days. I don't think the AI of it all changes those challenges.

Alan Rozenshtein: That's a nice tee-up for a question I was gonna ask later, but let me ask it now, which is: if it turns out that sprinkling magic AI dust on this question of age verification is, you know, not fit for purpose, and the alternative is real age verification, right?

You upload an ID or, or whatever the case is. Is that where you see this going? And if so, are there any legal issues if this is implemented by the government?

And here, I'm thinking obviously of the recent decision in Free Speech Coalition v. Paxton, in which the Supreme Court held that these sorts of age verification requirements to access pornography––and pornography is obviously different than a general-purpose chatbot––are subject only to intermediate scrutiny, and that, at least in the pornography context, they pass constitutional muster. And so again, it's not the same fact pattern, but it is close-ish.

And so I'm curious if all of that makes you think that inevitably where we end up in three or four or five years is states, or potentially even the federal government, mandating some sort of age verification for some subset of minors, for some subset of generative AI systems.

Jess Miers: Yeah, and if I can be a little bit more cynical, I think that's coming sooner than three to four years, and I think it's going to be broader than adult content.

And I'll explain why. Let's unpack the Paxton case, right? FSC v. Paxton. So a lot of folks have kind of said, like, look, FSC v. Paxton, the recent Supreme Court case, is cabined to pornography. And I think it's really important to sort of demystify that viewpoint.

So here's what actually happened in Paxton and why that's going to tee up age verification for not just internet services, but, you know, your AI services. And not just pornography, but anything that the state declares is harmful to a minor, right?

So, what Paxton actually did is––and Alan, you have an excellent article about this in the Atlantic––but what it did was it undid decades’ worth of precedent going back to the nineties, right?

And it did it very doctrinally strategically, in my opinion. So what it said first was that the internet is actually not special. Online services are not special. You know, we should treat them just like offline services.

And that was sort of part one of breaking away from precedent, right? In the nineties, we treated internet services as a unique communication medium.

As they should be treated, because unique communication services, now like AI, need to be treated differently. We have different considerations, different social harms, different use cases than a brick-and-mortar store.

But the court goes and says, ‘Nope. They're a lot like offline media.’ Right? And that was a big deal. It was a big break from precedent. And then the second thing they did was they said, well, because they're like offline stores, the law that we're talking about––Texas's age verification law requiring internet services to use age verification for online pornography––actually operates like a content-neutral law, that it has nothing to do with viewpoint, right?

Again, you can swap online pornography out for any type of content that's deemed harmful to minors, because if the court thinks that the pornography law is content-neutral, then you can apply that content-neutrality to any other type of law that comes in.

Right? So they said it's content-neutral. We don't need to deal with content, right? And then the last thing they did was they said, because it's content-neutral, we're gonna apply “intermediate scrutiny.” And I put that in quotes because really what they did, in my opinion––and maybe this is controversial––is they actually applied rational basis review and called it intermediate scrutiny.

And how do we know that? Because even in the cable cases, right––think Turner, for example, in the Supreme Court––when intermediate scrutiny was applied, the court still forced the government to show, with evidence, that the means it was proposing did not substantially burden speech. They still had to do that under intermediate scrutiny.

And what did the court do in this case, when they had amici telling them that this is going to burden substantially more speech than other methods for age verification?

The court said, we don't have to consider that. And the court said, we don't need the government to show us that this is the only means––it's just one means, but we don't need them to show us that that means doesn't substantially burden speech. They still have that burden under intermediate scrutiny.

Alan Rozenshtein: And, and just to clarify for those whose con law class was, you know, a long time ago: rational basis here refers to the idea that, as long as the legislation is not basically completely insane––which is to say, there is some rational basis that the legislature could have had, not even one it actually needed to have had––the courts will allow it.

So basically, what you're arguing is that they call this intermediate scrutiny, but really they applied, in effect, this “is it completely insane or not?” test.

And like, no, it is not completely insane to put age verification on pornography sites. And so that's essentially the standard that the court used to evaluate the constitutionality of the law.

Jess Miers: That's exactly right. And that's what they said in the opinion. They had said, well, we don't need to look further at whether the means you're proposing––age verification––burdens substantially more speech than necessary. That is still a requirement under intermediate scrutiny; under rational basis, not so much. Under rational basis, they could say, we will take Congress at its word, like they did in the TikTok case, but I digress on that.

Right? So they did the same thing in this case. They said, we'll take Congress at its word. If Congress says it's important for kids, or the state says it's important for kids, we're gonna take them at their word.

Okay. So that sets up a blueprint now for any age verification law that comes into play from the states, whether it has to do with pornography or with content harmful to minors. They now have the blueprint to say the internet is not unique, it's a content-neutral law, and the government doesn't need to prove its means burdens substantially more speech than necessary.

And that's why I am super concerned, when I walked away from that and folks said, oh, it's cabined to porn, it's cabined to porn. It's not. And in fact, we're starting to see that in the age verification world.

We're starting to see states sort of come up to that line now, and say, well, maybe not just online pornography, but potentially harmful content. This blueprint fits and works for AI as well. So that's my concern: number one, I think these AI companies are in for a stark reality when they realize that, oh, if our machine gets this wrong and says that a kid is an adult, we can be sued for negligent product design.

And they are being sued for that today. And number two, the states are catching on that the government just gave a full green light to mandate age verification, not just for porn, but for any types of content the state does not like.

Alan Rozenshtein: All right, let's talk about the lawsuits. We have these three lawsuits––two against CharacterAI, one against OpenAI.

I only wanna focus on the earliest, the first of these lawsuits, Garcia v. CharacterAI, because that's the one where we have the most––well, we have some judicial rulings.

So, this is again, one of these tragic cases of, of a minor talking to CharacterAI. And it ends in the minor’s suicide.

And so the mother, or the family, of the minor sues CharacterAI. CharacterAI files a motion to dismiss on both First Amendment grounds and also on kind of tort negligence grounds. And the court largely rejects those arguments and allows the lawsuit to continue.

Obviously, it's gonna be a long time before we know what's gonna happen. There's appeals, there's a bunch of stuff going on. But it is notable, and I think particularly notable for the First Amendment holding here, because the court said––and it's an interesting way that the court framed it––the court said, “at this time,” I am not prepared––and it's not entirely clear what the “at this time” is doing here––but at this time, I'm not prepared to hold that CharacterAI's output is First Amendment-protected speech.

I think there's a cite to the recent NetChoice case, in which you have the Justice Barrett concurrence saying content moderation is First Amendment-protected, except maybe algorithms are not, but I don't know yet.

So, so there's a lot of confusion here. Jess, obviously I know you've thought a lot about the First Amendment issue, and I'm curious what you thought. I will say, as someone who actually tends to be somewhat skeptical of broad First Amendment claims from Silicon Valley, I was quite surprised at the court holding that there's potentially no First Amendment issue at all.

And I'm just kind of curious how, how you, how you both think about that. And, and also how you think of the alternative holding, which is, okay, well, if there is a First Amendment issue here, what does that mean for being able to impose liability on these companies through these private tort negligence lawsuits?

Jess Miers: Yeah. So, I will say: my first big law review article, I am writing about this topic. My thesis is that, of course, generative AI speech is protected by the First Amendment.

But I'm coming at it from sort of a different perspective. 'Cause what I will say––and this is from reading a lot of the literature on this, thinking about what other legal academics are saying about this––

I actually think it's not that controversial. I think there is sort of this underlying agreement that, yes, the chatbot outputs are speech. Now, whose speech is it?

That, I think, is where there's some controversy, where the sort of argument is taking place. As far as I know, I think I'm one of the few, if not the only––I always have to caution myself on this, 'cause there's always someone who's written something, so I apologize.

One of the few folks who's coming out from the perspective that the providers of ChatGPT-type products, the providers of generative AI, CharacterAI in this case are actually more akin to First Amendment publishers.

And so the idea being that I, what I've done is, I've actually unpacked, I've gone very––

Alan Rozenshtein: Sorry, publishers as opposed to speakers?

That's the distinction you're making here? Publishers as opposed to what?

Jess Miers: They're speaking as publishers. They are taking on a publishing function under the First Amendment, akin to newspaper editors, if we wanna go there.

And where I'm coming from with this, I've done a very deep review of the computer science literature on how generative AI works. And what I've found from my research is that there is a huge disconnect taking place right now in the courts.

I think the Amy Coney Barrett opinion is kind of hinting that if it's done by machines, it's not speech. I think the CharacterAI opinion, and some of the other opinions that we're seeing come outta the courts, are trying to make this distinction between machine-made and human-made––that that's how we decide whether it's speech or not.

And I think that's kind of where the CharacterAI court came out on it. And the reality is that every single aspect of generative AI, from start to finish, is human-made.

What we're seeing is an obfuscation of humans, so as we, as our systems get better at automation, we see humans move further into the shadows, but there are human publishing and editorial decisions that are made every single step of the way.

The way in which the data, the pre-training data, is fed to generative AI, for example: that is an entirely editorial process, where they decide what kinds of data, when to schedule that data, how much of a certain type of data. They're making decisions at the very beginning as to what is going to be the source of truth for generative AI.

And then they move into things like fine-tuning. And fine-tuning is very hands-on. What these developers do is make decisions about, okay, if a chatbot were to get a question about climate change, here are the appropriate ways in which the chatbot could respond on climate change.

And they have already pre-trained their model on what the developer has decided is the central truth about climate change. And that's why you get these differences in outputs between Grok, for example, or between ChatGPT or CharacterAI. It's all human editorial choices.
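To make the fine-tuning point concrete, here is a minimal sketch, in Python, of what one supervised fine-tuning record can look like. The topic and wording are hypothetical, but the shape––a developer-authored prompt paired with the developer's preferred answer––is the standard structure of this kind of training data.

    import json

    # One hypothetical fine-tuning example. Whoever writes the "assistant" text
    # is making an editorial choice about how the model should answer this
    # class of question after training.
    fine_tuning_example = {
        "messages": [
            {"role": "system", "content": "You are a careful science explainer."},
            {"role": "user", "content": "Is climate change real?"},
            {"role": "assistant", "content": "Yes. The scientific consensus is "
                "that the climate is warming and that human activity is the "
                "main driver."},
        ]
    }

    # A fine-tuning dataset is thousands of records like this, typically
    # serialized one JSON object per line (JSONL).
    print(json.dumps(fine_tuning_example))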

And so when I saw the CharacterAI opinion, in my opinion, what I'm seeing is sort of what we used to call the Eliza effect: this idea that humans sort of project onto machines these notions that the machine is alive, or the machine is telling me what to do, right? We sort of see this Eliza effect, and––

Alan Rozenshtein: ––Eliza, if I recall, was this like very, very early, well, from the eighties or something––like super

Renee DiResta: Early, early, early, early eighties.

Alan Rozenshtein: And like, yeah, basically you would say something, and it would sort of append ‘yes, that's interesting,’ ‘what do you mean by that?’––and even though it was pretty dumb, kind of a dumb algorithm, people got really into it.

Jess Miers: Yes. And the creator of Eliza––that chatbot is actually credited as being, I think, the earliest chatbot, Joseph Weizenbaum's product. And it was 1963, I believe.

Alan Rozenshtein: Wow. That's earlier than I thought.

Jess Miers: Super early. And what Joseph Weizenbaum said––he's been taken outta context on this. Sorry, I'm gonna nerd out a little bit.

He became known as sort of the anti-AI guy, the guy who helped create this chatbot and then decided he hated AI.

And folks use that to say, well, AI is dangerous, and that's not why he was anti-AI. He states this in his paper. He states this in, in several of his books, that he says the reason he is anti-AI is because he has firsthand witnessed the way that humans project onto AI. And he called it the Eliza effect.

And so I'm seeing sort of this Eliza effect in the courts, that when you bring in AI, we lose all logic and reasoning. No, it is not a machine speaking, it is the amalgamation of words that have been very precisely picked and thought about and chosen since the very beginning of AI development.

And so the outputs are reflections of those viewpoints, of those editorial decisions. So that's my take on it: yes, of course the outputs are speech. I think the majority of folks agree it's speech, and I argue it's very much the speech of the developers, of the providers of that product.

And Alan, I know we're gonna have an interesting discussion about what that means next.
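For listeners curious what Weizenbaum's program actually did, here is a minimal sketch, in Python, of Eliza-style pattern matching. The rules below are invented stand-ins, not Weizenbaum's actual script, but the mechanism––match a phrase, reflect the user's own words back––is the one being described.

    import re

    # A few hypothetical Eliza-style rules: match a pattern, reflect it back.
    RULES = [
        (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
    ]

    def respond(utterance: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return "Tell me more."  # the default just keeps the user talking

    print(respond("I am feeling lonely"))  # -> How long have you been feeling lonely?

No model of the world, no memory, no understanding; and yet, as Weizenbaum observed, people got really into it.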

Renee DiResta: So I'm curious. Okay. So, as the non-lawyer in the chat here, I guess what I think about then is, like, all the defamation cases that have been dismissed.

Because the hallucinations related to personal reputations are a pretty major issue, where people have been accused of being sexual harassers and various other things, and that has been sort of summarily dismissed in most cases. So there's interesting implications for what you're saying there.

I guess the other area, though, from a computer science standpoint: I see what you're saying about differences in the extent to which humans are curating via the system prompt and so on and so forth.

And, you know, we've all seen what that's done––just for ease of audience understanding here––with, like, Grok moving into MechaHitler immediately, right? But there is also that––what the AI engineers will tell you themselves, which is that they themselves are often surprised by what the models produce, right?

The sort of leap there, where the things it chooses to string together sometimes exceed, you know, the sort of second-order reasoning, if you will––it goes beyond what they were expecting. And that, I think, is the area where, I guess, you lose me from Eliza back in the day to where we are now. Particularly because that leap is actually, in fact, what they're going for.

Jess Miers: I do have thoughts on this, if I can take a second. So I went into this as well from the black box perspective, and I will be very honest.

Computer scientists, the ones who are actually doing the research and the work and not the folks who are tied to these companies, the ones who are actually doing the research and the work, do not agree with that statement.

They do not agree that it's a black box. They do not. In fact, I have read several of these papers. The computer scientists are saying, actually, we know exactly––

Renee DiResta: ––why this has happened. Yeah, and I think that, like, the whole stochastic parrot kind of, you know, fear mongering is overstated.

I'm not, like, anchoring to that. It's more that the point is the exceeding of the training, if I can make that argument. Like, the goal.

Jess Miers: The goal. Yes. And so the goal is for––there's a bunch of different benchmarks and goals.

I won't get super deep into this here, but one of the goals, to your point, is to get it to a point where it, it is engaging in sort of this like human-like––however we wanna debate that––logic and reasoning, but for the most part, even computer scientists don't like the word hallucinations because it conveys this sort of misunderstanding about the AI that is actually not true.

The majority of what we call hallucinations actually are pretty explainable. They can be traced to very specific and precise design details in the product itself.

And so when I hear black box, I am typically hearing that from the companies. And let me just be, sort of, controversial, I guess. It makes sense they're making that point, right?

Because if they're going to be sued for defamation, if they're going to be sued for some of these claims where knowledge is the factor, what you knew and when you chose to speak, well then, "the black box, it's magic, it makes things up, I can't tie it to a specific design" works in their favor for liability purposes.

Renee DiResta: Yeah, right. One thing I noticed, and I would never write about this because there's no way to prove it, but one thing that's very interesting with some of the hallucinations about people in particular: if you put two names, particularly a male and a female name, into Google and you look at the AI-generated results from Gemini, oftentimes it will try to ascertain whether those people are in a relationship, and it will hallucinate a relationship.

And that's because that is a very common thing for people to search for in the context of celebrities. And so if one of those people is semi-well known, it'll actually return some sort of fabricated relationship, even if it's two journalists and you're trying to find an article that one wrote about the second name, or something like that.

And so that is a very common pattern, and you can kind of intuit why it might be doing that. I understand why that result actually makes sense from the standpoint of the query, but that doesn't change the fact that it's happening.

So this is where I really agree. When we get to the question of whether we are treating it as speech, as an output, I do wonder a little bit about how this shifts some of the thinking that we've had, where a lot of those cases have been dismissed, with the exception of Meta settling with Robby Starbuck and turning him into an AI advisor.

So maybe you guys wanna comment on that?

Jess Miers: No, I think that's exactly where it's headed, so I agree with you on that.

And, you know, as Alan will tell you, and I'm sure folks listening to this podcast have heard, back in the very, very beginning of this, I had written a TechDirt post, I think it's titled "Yes, of course Section 230 applies to ChatGPT," and, apologies, Alan, I think we're skipping ahead here.

But Section 230, of course, for listeners, is the law that says that websites and users of interactive computer services, we can just say websites, are not liable for third-party content. So essentially, if you defame somebody on Twitter, I guess X now, if you defame somebody on social media, you yourself, the defamer, are liable, but we're not gonna hold the social media company liable.

And I had put out a point, again, this is the very beginning, the early days of ChatGPT, and I had said, well, you know, Section 230 of course applies. Now, looking at it a few years later and acting as the academic that I am, growing and learning, I think that at the time when I wrote that, I didn't separate out the fact that I wanted Section 230 to apply to these generative AI programs, and I was willing to sort of find the argument to make it apply to them.

Because in reality, what I am seeing, to bring Renee's point to its obvious conclusion here, is that if these services are publishers, and I think I have learned pretty concretely that they are, based on the publishing activities that give us generative AI, then they are going to be held liable in tort for the outputs that they put out.

Now the question is, do we need to create a sort of immunity or safe-harbor-type approach, like the one we created for internet services, recognizing early on that the internet has this kind of great potential and we need to protect it?

Do we need something similar to ensure that generative AI can continue to exist? Because the reality is that these lawsuits are gonna continue happening, and it's kind of a mix: yes, they're speech, they're outputs, but the way that you use these chatbots matters as well.

There's sort of a two-party approach to the outputs here. And so, do I think Section 230 today applies to generative AI? I don't think Section 230 applies to anything anymore.

Wow, hot take, right? I think we have Swiss-cheesed Section 230 to hell. I don't know what the hell it applies to anymore, so probably not generative AI. But the real question is, do we need something like Section 230 for generative AI?

And I kind of think we do, if we want these products to continue to exist, or if we want more of these services to exist than just OpenAI and Meta.

Alan Rozenshtein: So I just wanna end on this question of what we're gonna see in the future. You know, Jess, you put out, I think, a very interesting proposal that we need a kind of Section 230 for AI.

I will admit, I think it is unlikely that that will happen given the current climate, and that if anything, we're gonna see a lot of regulation on the other side, both at the state and federal levels. So I'm curious, and we'll start with you, Jess, and then I'll give Renee the last word here.

In the short to medium term, let's say 18 to 24 months, what do you expect the legal landscape to be, both in terms of court decisions on the First Amendment issue, or alternatively on just the substantive tort liability question? And then also from state and federal legislatures on actual regulation of AI systems when it comes to their interactions with minors. You first, Jess.

Jess Miers: Completely agree with you, Alan. I think it's wishful thinking.

You know, do we need a Section 230 for generative AI? I think yes, if it's gonna exist and we're gonna have more than one company. But I don't think we're anywhere near positioned to be able to get there, unfortunately.

Where are things gonna go? I think we can look to the social media companies as our sort of crystal ball here. I had made that point: I don't know what the hell Section 230 protects anymore today. I think what we're seeing a lot of with the social media companies is that the courts are more willing to take on this sort of products liability, negligent design claim to frame algorithms, at least.

We saw that in Anderson v. TikTok. And so I think there's more of an acceptance of treating these speech products under products liability. That has problems in itself that I won't go super deep into, but I think we're gonna see more of it.

In the context of generative AI, it's gonna go one of two ways. Either the outputs are not speech at all, and these claims can proceed; I think that's likely.

Or it is speech: we're gonna reach this agreement that the outputs are speech, and as a result we're going to treat the providers of these products as liable under insert-your-favorite-publisher-tort.

I think it's a problem that we're doing this again under negligent product design when we have publication torts that could be better applied. But I think that's probably where things are going in litigation land.

When it comes to the states and the federal government: at the federal level, the Kids Online Safety Act, for example, KOSA, could potentially have legs.

There's been a lot of discussion about including AI under KOSA. I'm not sure if that's actually gonna happen, but I think there is appetite for KOSA, and I think there is legal runway now, thanks to the Supreme Court in FSC v. Paxton; it has a better runway when it comes to whether it can survive a First Amendment challenge.

And in the states, I think we're gonna see a floodgate of age verification laws, not just for pornography. I think we will soon start seeing more of these quote-unquote "content harmful to minors" type laws, where it's gonna just keep extending and extending beyond pornography.

And again, the Supreme Court has opened the door for that.

Renee DiResta: Yeah, I don't think I'll speculate on the first part. I am not a lawyer, and I don't know what direction things will come down on the speech-versus-not-speech aspect of that. But on kids' online safety, I do think there's a ton of appetite and momentum for it.

We've seen a number of the state-level laws pass, and we're seeing different tech companies struggle with which ones to take seriously and which ones to respond to. And, you know, Bluesky, I think we talked about that on a prior Lawfare podcast.

But I think the recognition, even among just the parenting community, of these dynamics, as these articles have come out one after another after another, means there is, I think, rising awareness that these things are not great, and people want to see something done.

Whether that's a matter for regulators or for media literacy, there is an increasing distrust, and I think you are gonna see that public pressure and that desire to see something get done, which doesn't always lead to good regulation, but it does generally lead to momentum for something passing.

So I also wouldn't be surprised to see something on AI get worked into KOSA or one of the other big bills that will come along and try to sort out the patchwork of things happening at the state level.

Alan Rozenshtein: This does remind me of the great phrase describing politicians: "We must do something. This is something. Therefore, we must do this thing."

But when, when inevitably that happens, we will obviously continue the conversation. Jess, Renee, thanks so much for coming on the show.

Jess Miers: Thanks for having us.

Renee DiResta: Thank you.

Kevin Frazier: Scaling Laws is a joint production of Lawfare and the University of Texas School of Law. You can get an ad-free version of this and other Lawfare podcasts by becoming a Lawfare Material Supporter at our website, lawfaremedia.org/support.

You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts. Check out our written work at lawfaremedia.org. You can also follow us on X and Bluesky and email us at scalinglaws@lawfaremedia.org. This podcast was edited by Jay Venables from Goat Rodeo.

Our theme song is from ALIBI music. As always, thank you for listening.


Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland. He also speaks and consults on technology policy matters.
Renée DiResta is an Associate Research Professor at the McCourt School of Public Policy at Georgetown. She is a contributing editor at Lawfare.
Jess Miers is a Visiting Assistant Professor of Law at the University of Akron School of Law, focusing primarily on the intersection of law and the Internet.
