Cybersecurity & Tech

Scaling Laws: The Ivory Tower and AI (Live from IHS's Technology, Liberalism, and Abundance Conference)

Kevin Frazier, Gus Hurwitz, Neil Chilson
Tuesday, September 30, 2025, 10:00 AM
How can academics positively contribute to AI governance? 

Published by The Lawfare Institute
in Cooperation With
Brookings

Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance.

The trio recorded this podcast live at the Institute for Humane Studies' Technology, Liberalism, and Abundance Conference in Arlington, Virginia.

Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower

Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Intro]

Alan Rozenshtein: It’s the Lawfare Podcast. I'm Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and a senior editor and research director at Lawfare.

Today, we're bringing you something a little different: an episode from our new podcast series Scaling Laws. It's a creation of Lawfare and the University of Texas School of Law where we're tackling the most important AI and policy questions from new legislation on Capitol Hill to the latest breakthroughs that are happening in the labs.

We cut through the hype to get you up to speed on the rules, standards, and ideas shaping the future of this pivotal technology. If you enjoy this episode, you can find and subscribe to Scaling Laws wherever you get your podcasts and follow us on X and Bluesky. Thanks for listening.

When the AI overlords take over, what are you most excited about?

Kevin Frazier: It's, it's not crazy. It's just smart.

Alan Rozenshtein: And just this year, in the first six months, there have been something like a thousand laws.

Kevin Frazier: Who's actually building the scaffolding around how it's going to work, how everyday folks are going to use it?

Alan Rozenshtein: AI only works if society lets it work.

Kevin Frazier: There are so many questions that have to be figured out, and––

Alan Rozenshtein: Nobody came to my bonus class!

Kevin Frazier: Let's enforce the rules of the road.

[Main episode]

Kevin Frazier: Welcome back to Scaling Laws, the podcast brought to you by Lawfare and the University of Texas School of Law that explores the intersection of AI, policy, and, of course, the law.

I'm Kevin Frazier, the AI Innovation and Law Fellow at Texas Law, and a senior editor at Lawfare.

It's great to share a live recording of the podcast from the Institute for Humane Studies Conference on Technology, Liberalism, and Abundance. I had a great conversation at the conference with Neil Chilson, head of AI policy at the Abundance Institute, and Gus Hurwitz, senior fellow and CTIC Academic Director at Penn Carey Law School and Director of Law and Economics Programs at the International Center for Law and Economics.

Our collective experience studying AI from different perspectives has readied us to call out a few flaws in our respective disciplines, and to identify where institutional incentives may discourage the sort of collaborative and timely research made necessary by AI. To get in touch with us, email scalinglaws@lawfaremedia.org. And with that, we hope you enjoy the show.

All right. Well, huge thanks to the Institute for Humane Studies and Paul and Steven for the opportunity to get to record this podcast. I'm very excited to have a live audience, an engaged audience, for this part of the Technology, Liberalism, and Abundance conference. And I couldn't have picked two better folks to point out all the things we're getting wrong about AI policy than Gus and Neil, so thanks, the two of you, for joining.

Gus Hurwitz: Great to be here.

Neil Chilson: Yeah, thrilled to be here.

Kevin Frazier: So Neil, let's start with you. We're three years into this era of AI. We've seen tremendous technical progress. If you look up any benchmark, most folks would be astounded by the progress we're making on the technical front.

But from a policy perspective, things are, to be polite, muddled. Here's an easy question: why?

Neil Chilson: I think the first reason is that AI sounds scary. And in particular, the mode in which ChatGPT was launched, in which these large language models were launched, was as a chatbot.

And I think that captured a lot of consumer attention. That obviously drove a huge adoption spike in this technology. And also, it looked like a person, right?

And so you're talking with it. And I think that leans into all the cultural concerns that people had already, seeded by movies like The Terminator, that these, that these are somehow going to replace humans.

And that sort of cultural fear has driven a lot of political interest in regulating this space. I think so much of that goes away––well, I like to run the hypothetical in my mind where ChatGPT wasn't ChatGPT, it was something like BiologyLLM, right?

Like, nobody would be talking about it the same way. We also wouldn't have the huge rush of investment and innovation in this space, probably. But it would be a more, a more calm time, in some ways. It's kind of a weird artifact that this came out as something in language that normal people can use that has made it so salient today. And I think also part of what makes it so useful, so that's part of it.

The political appetite here is also driven in some ways by the fact that, because of that label, AI refers both to ChatGPT as a conversational tool and to a much broader, actual category of technologies, and that makes this a very complicated space.

And so I often think, and often say, that it would be easier for politicians to grasp how broad artificial intelligence is in its applications if, instead of calling it artificial intelligence, they just called it advanced computing. Historically, that is what artificial intelligence has been: the cutting edge of how we use computers. And in fact, many of the algorithms in use today were, at one point, studied by people who would've called themselves artificial intelligence researchers.

And so if we think of it as the cutting edge of computing, it gets easier to think, wait, so we're just talking about regulating computers, advanced computing. Like, oh, that seems pretty abstract. How do I drill down on that? I think that's a really good starting step for legislators, or for people who are concerned about AI, to ask: what does it mean to regulate computers?

Kevin Frazier: Right, so we're not reinventing the wheel by introducing this iteration of AI. We're just building iteratively and incrementally on past technological improvements.

Gus Hurwitz: Oh, can, can I jump in on that?

Kevin Frazier: Gus, I was coming to you, don't worry. I want to, I want to hear all about it.

Gus Hurwitz: So, why in the world should we have AI policy? I think we, we have this whole discussion at the––the frame of your question, Kevin, is, where is AI policy? It seems really far behind.

Well, I don't think that when Bell Labs was developing the transistor, that we had ‘transistor policy.’ I don't think that there's internet protocol policy. I don't think that when Watt was developing the steam engine, there was ‘steam engine policy.’

Not to say that law and regulation weren't involved in all of these. I mean, the internet protocol: heavily developed under the Department of Defense, or Department of War. The transistor: Bell Labs, heavily regulated by the Federal Communications Commission. Even go back to the steam engine: tort law in the background and intellectual property law in the background.

There are all these law things in the background. But we didn't have steam engine policy or transistor policy. Why in the world should we have AI policy?

Neil Chilson: It gives me a good title, Gus.

Gus Hurwitz: Yes, it's a ton of work for so many of us. And a short anecdote: I, I just got an email from one of my colleagues who does criminal law, and she's been invited to an event to talk about AI and criminal law.

And she sent a bunch of us an email saying, so, I don't really know this field or this technology, and it seems like everyone's just saying a lot of the same thing, and most of it's nothing. And can you help me figure out, is there something actually substantive to say about this?

And I think the answer is, in many ways, no. Which doesn't mean we shouldn't be talking about these things. These are important. They're changing things. They're raising hard questions. We should be talking and thinking about this stuff.

But to say that it means we should have some AI policy as some capital letters thing at this point in time, I, I don't think is, a priori, something that should be the case.

And in fact, one of the things that I think we'll discuss on this during this hour is the messiness, the lack of coherence to the discussion. I think that's a feature, not a bug. If you look at how big things in the law tend to happen, the Copyright Act, the Telecommunications Act, and any big piece of legislation that moves, it tends to be because there's some crisis that drives it, that brings various stakeholder groups, various interest groups together, that gets them to do something to respond to a thing.

And oftentimes they respond poorly. Sometimes they respond––sometimes this catalyzes and crystallizes literally a decades-long series of discussions that have been happening over a period of time and leads to something good in legislation.

But right now, the discussions on the law and policy side that we're having, they're scattershot. They're a range of all sorts of, ‘I'm concerned about this. I'm concerned about that.’ And if you look at the laws that we are actually seeing, more often than not, I mean, with the state law stuff, ‘let's pass a law that says using artificial intelligence to do this thing that's illegal is illegal.’

And that's like 90% of AI laws that we're seeing. We don't need a law to say using this new tool to do an illegal thing is illegal. So, the, the fact that we've got all these concerns is telling.

To build on something that Neil said on the political moment and political economy of all this, it also bears noting, and we've seen so many legislators talking about this, that there's a real sentiment among many that we missed the boat with the modern internet.

So not the development of the internet protocol, the early stuff, but in the 1990s, with Section 230 in particular, there's a real sentiment that we should have been more concerned about online harms. And with Section 230, we made it impossible to go after these companies. We created immunity for them.

Agree or disagree with that proposition, that sentiment that we missed the boat with this previous recent major generative technology is driving a lot of the discussion for, ‘and we can't do it this time.’

So we, we gotta hurry up and do something. We don't know why. We don't know what, but we can't not do something.

Kevin Frazier: So we're responding to this sort of, what I'll refer to as a social media hangover of, ‘we got it so wrong on Facebook, we got it so wrong on Instagram. Let's be reactionary, let's be harsh. Let's be as proactive as possible to try to nip this in the bud,’ whatever the alleged harms may be.

And we're also seemingly orienting ourselves around a vibes-driven policy culture where we're swinging from having P(doom)s to then saying ‘We need to beat China,’ and then back and forth. And yet you all are also telling me that we're somewhat detached from where the computer science perception of AI would actually lead us.

So who's to blame, Neil? We've got K Street, we've got Wall Street, we've got the Hill, we've got state lawmakers, we've got the ivory tower. Who's responsible for perpetuating this perhaps unnecessary focus on AI-specific policy?

Neil Chilson: It, it's a great question. And I'll, I'll say like, a lot of them are repeat players.

So, when we hear people talk about algorithmic discrimination or other types of, sort of, ethics in AI––a lot of those academics, a lot of those interest groups, like, long pre-existed ChatGPT. And they were working in other spaces, and so a lot of them are sort of continuing on. I will say there is one thing that's new in tech policy––I mean, so many of these debates, and that's why so many of these debates, are just rehashes of all the tech policy of the past.

IP; we have, you know, misinformation, disinformation; we have privacy debates; and they're all being reframed in the context of AI. But one interest group that's new and somewhat novel in this space, and comes from an unusual place, is the sort of doomer contingent. You mentioned P(doom).

There wasn't––I can't remember a technology that came out, at least in my lifetime of doing this policy work, where there was a heavy contingent, a very vocal contingent, one that got a lot of press, that was saying this technology will kill humanity.

Gus Hurwitz: I don't remember a Teletubby army or anything like that.

Neil Chilson: Exactly. Well, and, and even more so, even––I'd never seen that happen from people within the industry. And so in many cases, a lot of these, not the most vocal ones, but the, the vibe is there, even in some of the AI companies, that this is a technology that is so powerful and so influential that we ourselves are going to call for regulation.

The first Senate hearing after ChatGPT launched had, you know, Sam Altman up there saying like, ‘we need to be regulated.’ That did not happen with the internet. Congress didn't even know what the internet was for a really long time. And so that is unusual in this political dynamic, and I blame them a little bit.

Kevin Frazier: Gus, point fingers, who's, who's to blame?

Is it this narrative by the industry itself that perhaps inflates the idea that AI is something that's new and novel and needs to be invested in by billions and trillions of dollars? Or are you going to point out another group of actors?

Gus Hurwitz: So I, I want to enunciate very clearly with what I'm about to say.

I, I have seen the problem and the problem is us. Not Gus. It is us, we academics.

The way that academics and the tech policy advocacy community, over the last generation, in response to the rise of social media and the internet, have situated themselves into these policy discussions, into the policy shops at these companies, sometimes.

This is a community that, everyone in it got into it because they believe in some form of doom. It doesn't need to be P(doom)-style doom, but the technology is harmful. It's causing problems. It is bad.

We, we need––and I'm not against trust and safety teams in principle, but the, the people who have driven the trust and safety narrative, the people who've gotten these positions, who've been raised into positions of power, they believe that the technology is first and foremost harmful, and we need to design it not to be. Not that the technology is beneficial and, on the margins, we need to prevent it from causing harm.

And those are two very different perspectives to come into these discussions with. I, I also do think there's a lot of public choice, political economy sort of stuff going on here.

Sam Altman, when he comes in and says ‘this technology has the potential to destroy the world, you should regulate us’––there are two things going on there.

First, he's saying, this is an incredibly powerful technology. We're doing huge, incredible things. This is so powerful it could not just change the world, but destroy it.

That's a little self-serving. It’s a little ‘I’m God’-ish, that you've got going on there. And then the ‘please regulate me.’ I'm already the established player, so yes, please regulate me but also make it harder for others who aren't already established to become competitors.

And also, please regulate me, regulate us––with us in the room as a partner in structuring these regulations, so that we can design them in a way that forestalls further, future, less good, less informed regulation, and also potentially helps to forestall regulation coming from other countries.

And a lot of this isn't bad. Public-private partnership, getting the technologists in the room, there can be a lot of good there.

Going back to the transistor: AT&T was heavily regulated by the federal government, by the Federal Communications Commission. A lot of what they did was in partnership with the federal government, in partnership with DOD in particular. They helped to make the American century that was.

And that wasn't just AT&T engineers saying ‘we're going to go off and work in our offices and come up with great ideas.’ It was, ‘we're going to go off and work in our offices, coming up with ideas in consultation with government stakeholders, in order to develop technologies that will serve important public, national security, and private interests.’

That’s a different sort of model. So it's not entirely bad, but it also is self-serving.

Kevin Frazier: So I want to dive so deeply into all of the manifold flaws with our ivory tower and the incentives for academics and why they're not lending themselves to better public policy.

But, Neil, I want to come to you first. Because even if this technology isn't as novel or transformative as it's been hyped by some, we're still having a lot of conversations about kind of reimagining how we would devise and oversee tech policy generally.

And given that you've been in the proverbial belly of the beast––that is, the FTC––what sort of institutional changes or capacity challenges are regulators facing that you would want to address to try to produce better policy, whether it's AI, quantum, or whatever comes next?

What are some of those foundational issues that we should be challenging at this moment?

Neil Chilson: Well, one of the biggest institutional challenges that government faces obviously in this space is just knowledge about the technology. And so there's––getting, getting knowledge about how this all works is, is a key part of, you know, not screwing it up.

One of the biggest reasons that's a challenge is because there's a sort of self-sorting going on in the tech policy space––and this has been going on forever, so this is not unique to AI; in fact, in some ways, and I'll get to this, it's maybe a little bit better in AI, or we have a chance to make it better in AI.

But historically, if you have a technical background––which gives you more cred with policymakers, often––and you believe in markets, that markets deliver good things for consumers, or you just aren't political, then you're out there building things; you're not coming to DC asking for things, right?

And so that means the people who show up in DC who have technical backgrounds tend to be people who are very skeptical of the outcomes of markets and who think that there is a need for government intervention. And that pipeline comes, in part, from academia.

Gus Hurwitz: I’m chomping at the bit––

Neil Chilson: ––but also, what that means is that when government hires people who have tech backgrounds to work on policy, they tend overwhelmingly to be people who are not classically liberal.

Maybe, I think even unrepresentatively so, compared to like, say, general lawyers overall. And so, I––to me, that's a big problem in the tech policy space.

Kevin Frazier: Gus, why aren't we filling this talent gap in the government?

Because it's long been acknowledged that this sort of technical capacity has been a challenge for the government well, before AI came about. And we've had programs like Tech for Congress. We've had various fellowships emerge.

Why aren't those sufficient? Why aren't we seeing more interest? How do we create a better pipeline? Or should we create a better pipeline?

Gus Hurwitz: Pipeline. Pipeline. Pipeline.

Kevin Frazier: Yes. Build, baby, build.

Gus Hurwitz: Well, you need to know where you're going to drill first.

I guess build, baby, build; drill, baby, drill. You need to know where the oil is first to know where you want to drill. And Neil, I am so happy and also a little frustrated that you framed your comments basically in terms of a selection bias issue. Because this is exactly the issue. And this is––

Kevin Frazier: So Neil's the issue. Yeah.

Gus Hurwitz: This is my own personal narrative. So, at the risk of being too autobiographical and explaining something visual for folks on the podcast: the shirt that I am wearing––everyone in the room came up and talked to me about it during lunch.

I made a really big mistake early in my academic career, which is I became a law professor. Now, that's the sort of professor that I could become, because I went to law school. And I was––as Eugene commented earlier, law professors are great.

We don't have PhDs, usually. We don't need to do all that hard work actually learning stuff in order to get to the academy. We just have to have strong opinions. Come back to that in one second.

But I went to law school coming from Los Alamos National Lab, coming out of the late nineties and early 2000s, the copyright wars and everything. I wanted to be the lawyer who could help engineers do what they wanted to do.

And that's what I thought all law students' interest in technology was going to be. But––so then I became a law professor, and my idea as a law professor was, I want to equip law students to become lawyers who can help engineers, help build the future, and be advocates for progress and all this great stuff.

But as Neil said, or suggested, with his comments: who, among people interested in technology, goes to law school? People who think there are problems with the technology, that it's creating harm, and that law is the solution.

And that, that's––you go to law school because you see problems and you want to use the law to fix them. You don't believe in the technology. You believe that the technology creates problems.

So that's not true of every law student, certainly, but just the general valence. If you want to find students and future academics who believe in the technology, where do you go? To the engineers, to the tech programs, to the CS departments, the people who go to school to build––learn how to build the future, how to design, develop these technologies.

So basically what my shirt says, for those in the room who don't understand differential equations: the benefits of teaching engineers a little bit about law and policy significantly outweigh the benefits of teaching law students just a little bit about technology and engineering.

Go to where the audience is, where you can have the most impact, you can get 85% of the benefits, Pareto principle, with 15% of the work.

Teaching engineers a little bit about policy, teaching them how the legal system works, how government works, how the administrative state works, basic economics, basic public policy stuff, really basic stuff––you can equip them both to defend themselves against the lawyers, as I now talk about it, and to advocate for themselves and their technology really easily by going to the engineers and working with them and teaching them.

Neil Chilson: Can I build off of that just for a second?

Kevin Frazier: Of course. Build, baby, build, you know, just building.

Neil Chilson: Of course. Yeah. And it connects back to the thing that I forgot, which is that––

[laughter]

Kevin Frazier: ––there we go. We knew we had, you had it in you––

Neil Chilson: Yeah. The other thing that's really interesting about engineers who come into––not necessarily the ones who come through the, the law school, because they have, they've heard a bunch of this stuff. But what we see in the AI space is a bunch of engineers who are now seeing––because this has become such a political issue, there's a bunch of people who are building this technology who are suddenly interested in legal solutions.

And there's a big chunk of these––I wouldn't say they're all doomers, but like in that crowd who think of law as a first solution, and––but they have a technical background. They haven't done a lot of public policy.

And what we've seen is that they bring a particularly risky mindset to this in some ways, in that their engineering training, especially in computer science, is often around systems that are discrete. They're deterministic. You can debug them, you can break them down into parts. You can look at a part, and if you understand the part, you can reassemble the whole thing.

So they're complicated, but they're not complex. And so they think of law and policy as essentially––their mental model is an engine or maybe even a computer that you can debug.

And so when they think of code––or, I should say, when they think of law, like legal code––they think of it as computer code. And that if you write it, it's going to be interpreted and applied the way that you intended it. And that nobody would ever, like, misuse it or anything like that.

And so it's been really productive––I totally take your point, it's been really productive for me to engage with those folks and to run them through exercises I call AI legislative red-teaming, where we––I assign them a role, right, and they, they take on like the state AG. And then they develop their own, like, most self-interested position that they could. And then they look at a law and they say like, how would I abuse this law?

And some of them, you can sort of see the scales fall from their eyes. They're like, ‘oh wait, you mean like somebody might try to misuse this law?’

Unknown: Mm-hmm.

Neil Chilson: And so it's really fun to go through that with engineers who have, I think––just teaching them a little bit about that helps them understand what, what might be good and what might not be good solutions for the problems that they're often eagerly trying to solve.

I think there's hope in the academic legal space as well, but it, it is harder in that space than it is maybe reaching into the, to the engineering space.

Kevin Frazier: So, to clarify though, I gotta clarify though. Yeah. So, Neil, you're training criminals? Is what I heard.

Neil Chilson: I'm, I’m training them––I mean, I don't know if you think state AGs are criminals, some of them are.

But I'm training them to think like state AGs, right. Or to think like a self-interested incumbent, or to think, like, the CEO of an interest group or a consumer protection group whose job is to, you know, make sure that you get more donations and can hire more people the next year.

So, having them think through those different roles actually really helps 'em think, like, well, policymaking is really messy, it's not deterministic, it's a complex system, and there's a lot of feedback loops that we really need to be aware of.

Kevin Frazier: And there is just something I have to call out 'cause we, we've heard the story of Gus. Which was a great story.

But I just have to add, because I think it's important for folks who are considering the legal academy––to the extent they haven't been turned away from that by the first 30 minutes of this podcast––my own entry into the academy, I think, was marked by one fascinating experience where I show up at my law school and I ask, ‘alright, I really want to do interdisciplinary scholarship. I'm so excited to call up folks who are in the CS department who are doing Econ work or Poly Sci work. You'll give me recognition for that paper, right, when I go to apply for tenure?’

And the committee chair looks at me and he goes, ‘well, how do I know you didn't write 1% of that paper? And your co-author wrote 99%.’

And I said, please find me that co-author, because I would love, I would love to write that paper.

But there is no incentive for a lot of junior scholars, especially in the legal academy, to write interdisciplinary scholarship, or to write scholarship in a way that people will actually read.

Outside of Eugene Volokh, who I am so glad is in the room, there are very few people who have their law review articles read. My most popular download was a paper on undersea cables that I wrote in the middle of law school. I wasn't even a professor at the time. And that's the paper that gets cited.

So, Gus, I know you have a lot to say, but one thing I'm particularly keen to hear about is––given your interdisciplinary work, given the cool stuff you're up to at Penn––if you could be czar of higher education, what levers would you push on, what things would you twist, to try to get us all being a little bit more productive or useful?

Gus Hurwitz: Yeah. So, I have a whole lot that I'm going to say about that.

Okay. First, real quick, Neil, first, I have to emphasize, double up, plus one your comments about teaching engineers a bit about ‘the law is not code.’

And this is something I spend a fair bit of time––I mostly teach engineering students nowadays, and it's something I've worked with them on. And it changes––a lot of them come back to me and say, this completely changed my understanding of what the heck is going on.

And it––again, this is Pareto principle stuff. This is not difficult to understand once you understand it. And I also have to add: for every one Neil, there are 10, maybe 100 folks in the public policy space who come at these issues from the perspective of ‘the technology is dangerous, we have to use the law to fix it.’

And most of the engineers, most of the tech folks who come into the policy space, they learn a lot about public policy from folks who think about it very differently than we do, from folks who are going to teach them ‘yeah, we use the law to fix these problems and these are the things we should be concerned about.’

And it makes it––it poisons the well for having public policy conversations with these engineers.

Kevin Frazier: So, how do we fix the academy? In 60 seconds or less.

Gus Hurwitz: Yeah. So it, it's hard. And Kevin, you hit on one of the hugest issues, which is, incentives matter, and everyone has incentives.

And we can really start at the top of most universities: incentives and funding, getting federal funding, grant funding, bringing funding in, which goes down to the departments, which affects who they hire and how they recognize the work people are doing for tenure and promotion purposes. If I were to want to get a tenure-track position in an engineering program, I wouldn't want to, because they wouldn't know what to do with me.

They would want me serving on various engineering department-focused committees, doing engineering-focused work, supervising engineering-focused doctoral students, bringing in engineering-focused grants. And I don't do any of that. So they wouldn't know how to evaluate what I do, and that's hard. They, they have other issues that they're focused on.

At the same time, if you look at how a lot of the great universities of the world got started––if you look at the Chicagos, if you look at how Stanford became Stanford––they basically came in and said, ‘we want to be great. Let's go find a bunch of people that are just doing great stuff and hire them, give them tenure out the gate, and say, you all get together and do great stuff.’

And that's kind of the, the startup model, what you need to do. And we're not in a system––we're not in an environment where we really can do that sort of thing.

One idea that I tell folks––anyone that will listen––and I just said why this won't happen, but I think it would be really great if engineering, CS, STEM tech programs were to start bringing in actual tenured law faculty who work in these areas to teach the law and policy of that field to the engineering students and to engage with the engineering faculty.

And this is for two reasons, it's a bidirectional relationship. First, it's going to create opportunities for the rest of the faculty to learn from this faculty member and also for the faculty member to learn from them.

If I teach––let's see, I don't know, I'll just randomly pick nuclear engineering, nuclear power-related stuff. A law school might teach nuclear energy-related topics from an environmental law perspective or an energy policy perspective. None of my law colleagues are going to know a damn thing about the actual technology of my field. But they will pretend to.

They, they'll pretend to. Yeah. And frankly, I'm going to probably be pretending to, with most of the limited knowledge that I have. I'm not going to increase my knowledge. What I do learn, I'm going to learn from interested policy folks. And I, I'm not sophisticated enough to evaluate that.

It's a real pain for me to walk across the street and grab coffee with colleagues who actually understand the science and the engineering behind this. I'm not going to do that. They're not going to come have coffee with me.

So, I'm mostly going to be learning, basically, talking points as my substantive hard science knowledge about this field that I'm ostensibly teaching. If I go and teach the law and policy issues to the engineering students who are working on this topic, I'll have conversations with the colleagues who are actually designing these technologies and I'll learn from them. I'll learn the BS that most of the policy folks in the field talk about and why it is. And that will improve everything for everyone.

Might lose a bunch of friends that way. But maybe I'll make some new friends in the end.

Kevin Frazier: Gus, I believe in your power to make new friends, but we'll, we will test that down the road.

I do think it's worth stressing, also, that a lot of this is even architectural, just the way campus is designed. If you have to walk eight minutes in the Texas sun to go to the CS department and you're wearing a suit, I'll tell you, you're not making that walk unless you want to come in as an embarrassment.

I heard folks at Harvard Law School, when they were invited to the Harvard Kennedy School, a 13-minute walk through historic Cambridge, nice and breezy: ‘oh, I'm not doing that. Why would––and I can't find parking,’ so on and so forth.

And so, unless you're willing to go to the foundational roots of, ‘we really care about interdisciplinary work, we're going to bring you together, we're going to put you in the same building,’ so on and so forth, it just won't happen.

Gus Hurwitz: But I can't resist. I have to jump in and just say: Mervin Kelly, the longtime director of Bell Labs, approached running Bell Labs as a research topic in and of itself.

One of his really simple but powerful innovations was the really long hallway in the building, which forced people, as they did that two-minute walk down it, to say: hi, how are you doing? What are you working on? Any problems? What's frustrating you right now?

It forced people to interact in an organic, stochastic sort of way, and they learned from it. They made relationships that fostered innovation.

Kevin Frazier: So, architect listeners, please come visit a campus and help us out.

But, before we pivot away to some fantastic audience questions that will not be recorded––sorry, listeners, you don't get all the juicy stuff.

Neil, we've talked a lot about the vibes being off. We've talked a lot about faulty assumptions. What's at the top of your list of things you think we’re just getting dead wrong in a lot of the AI policy discourse?

Neil Chilson: Well, so much of the focus is on––again, the easy answer is a little bit of a repeat, which is the idea that AI is one thing and that we can have a regime that regulates it as a single technology, whereas it's a general purpose technology that will be applied in every single field.

It will have big effects, even if they're not as big as some of the boosters say. And there will be new problems, but most of them are going to be old problems. Harms do not change nearly as fast as the technology that causes them. And we have lots of legal mechanisms for dealing with harms that have developed over time.

And so I think that's the number one thing I would say. I'll also add some hope for the academic world, because this is a moment of big disruption. AI itself is going to be very disruptive, I think, to the university experience and to the educational experience overall. And times of disruption are times of opportunity for people who see that the status quo is not working.

And so my great hope is that it doesn't even require internal change. One of the great examples I've seen is that pressure from outside can cause change as well. And so when you increase that competitive pressure in the market, institutions, even long-established institutions, can change.

And so, like, one of the great examples is––and I don't know a ton about this space, I'm not Jewish and I don't have this background––but there was a movement in the sixties and seventies, for people who weren't satisfied with their experience at synagogues, to start, like, home fellowships.

And it was called the Havurah movement. I think––I might be saying that wrong. But what that did is not just satisfy what those people wanted, it also demonstrated to the synagogues that there was a real interest in something different than how they had done things in the past. And a lot of the, a lot of synagogues started to shift in that direction as well, and started to offer some of those types of experiences.

And I think that could happen in the university setting as well. I think we're going to see a lot of challenges to the university model, and I think that's going to put real competitive pressure on universities to change both how they treat and train their students, but also, like, what research looks like in those universities.

And so I think there's an opportunity here, and some of it is being created by the pressure that AI's bringing.

Gus Hurwitz: Yeah. I'll just, again, plus-one what Neil said. This is, I think, a real moment of opportunity for universities, for a wide range of reasons. Innovative universities––this is innovator's dilemma stuff, also.

The big established universities, it's hard for them to do DNA-changing experimentation. They can create centers and little side things and maybe Progress Studies departments. But I don't think that if, let's say, Harvard started a Progress Studies department, it would have any substantial impact.

But if a smaller university starts a Progress Studies department––a program that grows into a department and then into a college, and they triple their small starting enrollment––that says something, and that becomes a model that other universities will replicate.

Kevin Frazier: Well, and I think a call to action for all of our student listeners:

When you go and apply to schools, ask them about the interdisciplinary work they're doing. Ask them how they're reevaluating their coursework and how they're evaluating students and evaluating professors.

Yes, I said it. There should be changes in the metrics we apply to professors, because that pressure too can drive change. That, I'm really excited about.

Any final words, gents?

Neil Chilson: AI is exciting. People should use it. That's the other thing I think a lot of people get wrong: they haven't really used it that much before they start talking about it. And not only is it useful, this isn't like, you know, having to use a nuclear facility.

You can download an app and use it right now. Right. Like––and so people should be doing that.

Unknown: There's a nuclear facility powering it.

Neil Chilson: Right, exactly. You're indirectly using that facility, I suppose. But the barriers to trying this stuff out are so low that everybody who's talking about it should be at least trying it.

Gus Hurwitz: And I'll, I'll end on a note of optimism. I think a lot of this discussion and a lot of the stuff I say in particular is, is a little gloomy and dour, and it's about why things don't work, and why they can't work, and why things that we might try are probably going to fail.

I'm in a moment of optimism. I think that we are in a moment of possibility, of change––dare I say it, of potential future abundance. And the framing, I think, is really important, because so much of what we've talked about throughout this conversation, the policy discourse, is framed in negative terms. And that sets the tone of everything.

If we look at Eugene and his keynote: technology leads to abundance, progress leads to liberalism, and liberalism leads to wealth and all these things––but not always.

The framing is one of the biggest drivers of the “but not always.” So I, I'm optimistic. I, I look forward to the future. And I think, for students in particular, so many of them right now don't. And that's on the universities.

Kevin Frazier: Well, there you have it, administrators. Good luck with that episode.

But for now, we'll leave it there. Thanks so much for joining, Neil and Gus.

Scaling Laws is a joint production of Lawfare and the University of Texas School of Law. You can get an ad-free version of this and other Lawfare podcasts by becoming a material subscriber at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Check out our written work at lawfaremedia.org. You can also follow us on X and Bluesky. This podcast was edited by Noam Osband of Goat Rodeo. Our music is from ALIBI. As always, thanks for listening. 


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and a Senior Editor at Lawfare.
Gus Hurwitz is a senior fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics.
Neil Chilson is the Head of AI Policy at the Abundance Institute.
