Lawfare Daily: Adam Thierer on the Bipartisan House Task Force on AI’s Report
Published by The Lawfare Institute in Cooperation With Brookings
Adam Thierer, Senior Fellow for the Technology & Innovation team at R Street, joins Kevin Frazier, Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin and a Tarbell Fellow at Lawfare, to examine a lengthy, detailed report issued by the Bipartisan House Task Force on AI. Thierer walks through his own analysis of the report and considers some counterarguments to his primary concern that the report did not adequately address the developing patchwork of state AI regulations.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Please note that the transcript was auto-generated and may contain errors.
Transcript
[Intro]
Adam Thierer: Are we going to have preemptive, prophylactic types of regulations for AI that try to figure out everything that could happen inside the black box beforehand? Or are we just going to say, look, we're going to see how these things play out and we're going to sort of roll with the punches and figure out after the fact, utilizing a diverse toolkit? That very much informed this House AI Task Force report, that vision.
Kevin Frazier: It's
the Lawfare Podcast. I'm Kevin Frazier, senior research fellow in the Constitutional
Studies Program at the University of Texas at Austin and a Tarbell fellow at Lawfare, joined by Adam Thierer, senior fellow for the Technology and Innovation Team at
R Street.
Adam Thierer: At the end of the day, this paper, this report, has opened the door to, like, open source being a key part of the American AI story. And I think that's a huge plus.
Kevin Frazier: Today
we're talking about a lengthy detailed report issued by the Bipartisan House Task
Force on AI.
[Main Podcast]
Let's start with the basics, Adam: who was behind this monster of a report? 273 pages, 66 key findings, 85 recommendations. Who the heck authored this, and what was their mandate originally?
Adam Thierer: Yeah,
well, this report came together because the House AI Task Force was formed
earlier this year to study a wide range of issues that were of concern to
members in the House after the Senate had taken the lead.
Pretty unusual for the Senate to take the lead before the
House, but Senator Chuck Schumer and a couple of other key senators had a
bipartisan effort on the Senate side. And there were so-called AI Insight Forums
and other things. And so the House kind of got overtaken by events a little
bit.
The Speaker's office and the Minority Leader's office got
together and said, okay, let's do our own thing. And so they brought together a
bipartisan team led by Representative Jay Obernolte and Representative Ted
Lieu, both of California. And they got started in February.
And there were a variety of closed-door sessions. Another unusual part about this was a series of testimonies before all the members of the task force, and I was lucky enough to be part of one of them, and many colleagues were part of others. And it was a really interesting and more detailed discussion of AI policy that followed, with lawmakers sitting for, in many cases, several hours to listen and have debates.
And it was a really interesting experience that yielded this beefy
report, as you say. I mean, it's one of the biggest reports I've seen. I've
been covering tech policy for 33 years now. This is one of the biggest
technology policy reports I've ever seen Congress issue. So that gives you a
feel for the gravity of this issue in Congress today.
Kevin Frazier: Well,
we're very thankful to you for diving into the morass that is the report. And
one point I want to additionally flesh out is why having Representative Lieu
and Representative Obernolte lead this task force was so important.
What are their distinguishing factors as folks who are authoritative voices on this policy topic that may counteract the traditional narrative that Congress doesn't know what it's doing, that it doesn't have any expertise? What's unique about these two co-chairs?
Adam Thierer: Sure.
Well, that's a great question.
First of all, and most obviously, they are from California.
California is a pretty important state when it comes to AI and digital
technology. That's the most obvious thing. But more importantly, these are both skilled individuals in this department; these are people who have put a lot of serious thought into technology policy writ large.
In Mr. Obernolte's case, as the chair of this, he is the only
member of Congress with a degree in artificial intelligence. And he's also a
successful video game entrepreneur. He's one of the most interesting characters
I've ever met in Congress. You go to meet him in his office and he literally
has a collection of his favorite old arcade games from the 80s. And he’s literally
challenged me to competitions in front of some of these games that we're both
old enough to know.
Kevin Frazier: Critically, did you win?
Adam Thierer: I didn't, we didn't play. We're like, we
should talk about policy, right? We shouldn't play Galaga all day long in your
office, Congressman. And he's like, yeah, you're probably right.
Kevin Frazier: Ah,
dang next time.
Adam Thierer: That's
a unique character right there. And so here's someone with real world business
experience as a successful entrepreneur and a degree in the topic. That's a
winning alignment of skill sets. And so he did a wonderful job chairing this along with Ted Lieu. I have to say, I think they really did bring out the best in sort of the bipartisan policymaking effort that we had hoped for.
It proved once again to me that we can have bipartisanship in tech policy in the United States. This was a key feature of
technology policy in the 1990s that people have sort of forgotten about in our
hyperpartisan age today. But back in the 90s, tech policy was very much a
bipartisan thing. The Clinton and Gore administration worked very closely with
the Republican Congress to bring about many of the policy successes that gave
us the internet and the digital revolution.
And I'd like to think that what Representatives Obernolte and Lieu
have done here is sort of rekindled that flame a little bit. And although
there's a lot more to chew on here, and there's a lot of, you know, ambiguous language and aspirational statements in this report, it's still really good to
see that a lot of lawmakers can get on the same page on some key issues.
Kevin Frazier: And
flagging for our listeners, you've done extensive writing in this space and
you've paid particular attention to statements from Representative Obernolte on
this topic. And you've highlighted that he, for example, has called for AI
regulations to be based on quote, the values of freedom and entrepreneurship, end
quote.
And I'm wondering if you can give us a sense, looking at the
report, knowing Representative Obernolte's general framing, how do you see that
sort of value statement appear in the report itself? And then we can get into
the substance soon. But just at a high level, how is this the sort of document
you would have expected from Representative Obernolte?
Adam Thierer: Yes,
indeed. Well, Representative Obernolte has had a lot to say about AI policy
leading up to this report and has written some really interesting things about
it, including a journal article for the Ripon Forum in which he talked about the role of Congress in regulating AI.
And that's where that quote appears that you mentioned, where
he identified the importance of having that sort of a vision of freedom and
entrepreneurialism. And then also making sure we didn't have government control
of technology or what he called the anti-democratization of knowledge. And I
like that phrase.
That's very much in line, again, with the old bipartisan vision
from the Clinton/Gore days in the 1990s. And so I love that. But more
importantly Representative Obernolte has gotten into the weeds of policy. And
he's talked about sort of like the key principles that should guide the
policymaking process. And the most important of them I wrote about in a piece that I called, ‘The Most Important Principle for AI Regulation.’
And I highlighted how Obernolte has repeatedly said that we
have to make sure that we don't get too hyper focused on the actual process or
models behind algorithmic technologies and AI systems, and we should focus more
on outputs and outcomes, real world consequences and results of these systems.
He said, because you can get too lost obsessing about, like,
what's going on inside the black box, and sometimes you won't even be able to
figure out preemptively exactly where the harm might lie.
And so he talks about this as sort of a defining feature of
like good AI law, like making sure we focus on, you know, the actual outputs
and not the input side of things. And I think that's crucial. I really do think, again, as I said in my piece, that's the most important principle. That's what we're really fighting over.
Are we going to have preemptive, prophylactic types of regulations for AI that try to figure out everything that could happen inside
the black box beforehand, or are we just going to say, look, we're going to see
how these things play out, and we're going to sort of roll with the punches and
figure out after the fact, utilizing a diverse toolkit? That very much informed
this House AI Task Force report, that vision.
Because thirdly, the thing that he has stressed again and again is we need flexibility. Flexibility, agility, incrementalism: it comes up again and again in his speeches, in his interviews,
and that has played out in this report in a really big way.
Kevin Frazier: And
before we get into the weeds, and I promise we're almost there, I just have an
aversion to pulling weeds because it was a household chore when I was growing
up. We'll get to the weeds soon.
One thing I'd love for you to flesh out in just slightly more
detail, too, would be why, from your perspective, it is so important to follow
that Gore/Clinton approach to digital innovation and innovation writ large in
comparison to what we've seen, for example, be applied generally as a
regulatory framework in the EU where innovation perhaps isn't as rampant, isn't
as evident as a result of some of the regulatory choices there.
Adam Thierer: Yes,
absolutely. Thanks for asking that question because this has been the focus of
my life's work. In fact, I've written a couple of books about this issue and
talked about this quote unquote conflict of visions, if you will, in the transatlantic sense between the U.S. and the EU on digital technology policy
over the last 25 years.
When economists and political scientists like myself look at
the real world and try to study it, we really love it when we can find so-called natural real-world experiments where two jurisdictions that were maybe similarly situated at a certain period of time end up utilizing different policy paradigms to govern a new emerging technology or a new sector.
And that happened in the 1990s.
The United States went one direction under the Clinton/Gore
administration through the so-called Framework for Global Electronic Commerce
and a variety of resulting laws and policies. And the European Union went in a
very different direction with very heavy handed top down data directives and a
wide variety of rulemakings that continue to this day.
And so you've got rules like the GDPR, the DMA, the DSA, and
now the EU AI Act and a whole bunch of policies and regulations in between. The
United States took a far more incremental bottom up approach. And I would argue
that these two paradigmatic approaches to tech policy had some important real
world ramifications that still have continuing lessons in the AI world.
Today, you know, 19 of the 25 largest digital technology companies in the world by market capitalization are U.S.-headquartered companies. Only one is in Europe. 48 of the 100 largest digital employers in the world are U.S.-headquartered companies. There's not a huge number in Europe. You can take
a look at investment flows and talent flows and so many other metrics on this.
I could go on all day about this and I've got all this
research. The situation is not good for Europe right now. They've really
hollowed out their digital technology sector. And whenever I'm in front of
students or audiences, I often challenge them with a simple test, like name me
a single leading digital technology innovator headquartered in the European
Union today.
And the best they usually have for me is Spotify. I say, well, I'll give you that one. And, you know, I admit there are a few others, but there's not many. And the fact that it's so hard for us
to name those leaders and the fact there's such a lack of them in the digital
technology space, it has to tell us something about policy.
And I believe it tells us that, again, the Clinton/Gore vision
worked, that this sort of more bottom up incremental, you know, take it as it
comes kind of approach made a great deal of sense as opposed to the more top
down precautionary principle based approach of the EU that attempted to solve
every problem before it even happened.
And I fear that's the direction we're headed; you know, we're having that debate right now in AI policy in the United States at the state level, as we'll get to in a moment. But the fact of the matter is that the United States
steered a different direction and it had positive consequences for innovation
and for speech.
I want to be clear. It wasn't just about commerce. It's about
speech too. We have more speech platforms and speech opportunities than any
nation in the world because we got policy right.
Kevin Frazier: And as
you and I have talked about, we've seen that the incoming Trump administration
seems to be leaning into this approach of putting innovation at the forefront
of AI policy.
There's a long discourse we could have about the extent to
which the Biden administration appeared to be more focused on some of these
ethical concerns or concerns about discrimination and an urge to get
into some of those internal processes you were talking about earlier.
So now turning to the actual report and having that sort of
background framework in mind and emphasis on innovation, we've got 66 key
findings, 85 recommendations. We're not gonna march through all of those, as much as I think listeners would love to know the ins and outs of this full report. I'd love for your kind of expert take on which of those findings, which
of those recommendations really stand out to you as warranting additional
attention.
Adam Thierer: Yeah,
I'd be happy to, but let me start, Kevin, by taking just a little bit of a step
back because I want to remind people where we stood in the AI policy debate in
this Congress. Just even 18 months ago, at the beginning of the 118th Congress,
we started a pretty vigorous debate about AI policy following the launch of
ChatGPT and interest was intense.
And I would argue there was a lot of hysterics. There were
people who really kind of lost their minds at first about this stuff. And I'll
just take you back to May 16th of 2023. The Senate Judiciary Committee held a hearing with Sam Altman, Gary Marcus, and a representative from IBM.
This committee meeting happened at the same time that the Pause AI letter was gaining many signatories; you may recall it said we should just stop AI, period, full stop, whatever that means. It was never fully explained, but there were a lot of people signing onto it, right? Including Elon Musk.
Kevin Frazier: For listeners who perhaps have hazy memories from May 2023, the basis was this concern: the overriding concern of the folks signing onto that letter was existential risks or catastrophic risks posed by AI, which have broadly informed this sort of AI safety movement, which we've talked about previously on the pod. That's just for folks who hadn't read the letter recently. But Adam, please continue.
Adam Thierer: That's
exactly right. So it was very much dominated by that concern, sort of like
Terminator-esque concerns.
And so, unsurprisingly at this May 2023 Senate Judiciary
hearing, we heard references to AI as a new atom bomb, and one senator famously
said that when we think about AI, we should just start from the premise that it
wants to kill us. And then there were proposals put on the table for
comprehensive AI licensing, just broad licensing; it didn't even specify, like, by sector or anything else. It just said broad-based, general-purpose AI licensing by a new regulatory body that would resemble a so-called FDA for algorithms,
whatever that is. It got worse from there, where we went into a discussion of
like how we might bring it all under global control from some new global
regulatory body. Or at least America would, you know, somehow become compliant
with everything the Europeans wanted.
I mean, it went on from there. And it was just an astonishing
thing to witness. And I wrote about it that day, and I said, you know, it's
hard for me to believe this conversation can get much worse from here. The only place left to go was full-blown nationalization of all algorithmic systems and high-powered computing. And believe it or not, there were some people putting that idea on the table in academic circles. But let's flash forward now, 19 months.
Kevin Frazier: Well,
and just to pause there before the 19 months, for folks to hear your take on the AI safety debate: the folks who were urging and encouraging senators to consider those risks were saying, yes, perhaps we agree that the worst-case scenarios we're hypothesizing may have very low odds of occurrence, but because those risks could end humanity, that's why they warrant particular attention. What's your pushback there in terms of just that epistemic approach
to understanding policy?
Adam Thierer: Sure.
Yeah. Well, we got way out ahead of ourselves with hypothetical Chicken Little
scenarios about like how, you know, the AI was going to result in the
Terminator and come and kill us all.
And it resulted in California pushing for a major comprehensive
law, SB 1047, that resulted in one of the most historic technology policy
debates in my lifetime. In fact, I'd have to go back to the crypto wars of the nineties; I don't think I've ever seen such an interesting, strange-bedfellows collection of people for and against SB 1047.
It was really, really intense. And California passed that law, which would have had a pretty interesting restrictive control regime for so-called frontier AI models as measured by a certain computational power, and then even a new regulatory bureaucracy to oversee it. But then Governor Newsom vetoed that bill in California.
And so the debate that was started at the federal level kind of
shifted to the state level, but really played out in Sacramento. And, you know,
again, many of us, including me, pushed back aggressively and said, look, we are getting way ahead of ourselves. These are hypothetical concerns. They
have no basis in reality.
People have been watching, you know, too much sci-fi and Black Mirror
episodes, you know, in the evenings. And we need to step back and have a more
cautious approach to this, especially because we pointed out how devastating
that would be to certain types of algorithmic innovation, not just for smaller
players and others, but more specifically for open source players.
And so that was an incredibly intense debate, but one that's
now been kind of paused a little bit after Newsom's veto. And we could get into
this, but this has shifted the dynamic over the so-called X-risk, existential risk, debate back to the federal level. And the so-called AI Safety Institutes
that are being formed across the globe, including here in the U.S., and efforts
to have different approaches to it at the federal level besides flat bans or
flat restrictions.
But it could play out in many, many ways, including some ways
that aren't really specified in this new House AI Task Force report, which doesn't say a lot about some of the export control debates and some of the other things about capping computational power or something. So there's a lot left
ambiguous about like where this task force comes down on AI safety. And so
that's one side of the debate.
And then there's different types of debates like AI
discrimination or ethics and bias. That's another bucket. And then there's sort
of sectoral battles or targeted, you know, concerns that we could talk about as
well, but those are the major sort of like political, you know, demarcations or
buckets that I use to describe the way AI policy is playing out in the United
States right now.
Kevin Frazier:
Looking at the report itself, you mentioned that even at its incredible length,
it doesn't touch on everything. Of its recommendations and findings, what were
your big takeaways from its focus? What should we glean? What might it be
suggesting about Congress's focus come 2025?
Adam Thierer: Yeah,
absolutely. So first of all, let's talk about tone and general recommendations.
The tone is balanced. The tone is sober. It is not the tone that we heard from that May 2023 Judiciary Committee hearing that I just discussed. It's very different, and that is important.
Words matter. Entrepreneurs in the market get signals from the way that policymakers talk, and it can spook investors as well. And so we
have a very much more responsible approach here. I'd argue the adults have
entered the conversation. And they've come up with a more balanced approach.
They've also come up with an approach that repeatedly stresses
flexibility. The term flexible or flexibility appears over 20 times in this
report. And terms like agile, incremental, things like that, they appear
multiple times as well. And so that is what I'll call a return to normalcy.
Like this is the way American technology policy for digital issues has worked
generally for the last 25 years. Incremental, agile, flexible, bottom up, a lot
of stress on things like best practices, multi-stakeholderism, you know,
collaborative efforts.
But it's really this sort of more rolling with the punches kind
of approach to policy that I will admit freely is very, very messy because all
decentralization is messy. And that frustrates people. People love more broad,
you know, silver bullet solutions up front, like, how do we solve this problem?
And this report very maturely admits that we don't have all the answers, and even uses the term humility when talking about it. And
Obernolte said, leading up to the report, I'll just quote it, that this is not going to be one 3,000-page AI bill like the European Union passed last year, and then we're all done with it. Instead, he said, you know, America will need to have a more flexible and responsive approach.
And that is what this House AI Task Force has recommended: a more incremental, flexible approach. So that at the highest level is in and of
itself an important policy approach. It's not a specific recommendation, but it
informs all the recommendations you see from there on out, which very much have
a sort of on one hand, on the other kind of approach.
It doesn't have this bright line like, and we therefore decree
that this is how we will solve all issues involving AI and copyright, or AI and
national security, or whatever. It says on the one hand, there's this issue. On
the other hand, there's this. But generally speaking, comes out more in the
direction of freedom, flexibility, you know, pro-innovation kind of approaches.
So that's important.
Kevin Frazier: Yeah,
and what's really telling about this report is the comparison to the Senate's Bipartisan AI Working Group and their policy roadmap. That policy roadmap was about eight pages of what could have been a Harvard Kennedy School memo, and I can say that, having been to policy school, there just wasn't a ton of substance, right?
We weren't even able to really detect what the general tone
would be, what the signal would be from the Senate working group. Here though
now, I think with this bipartisan group of folks who are well versed in AI, who
have a stake in AI's continued exploration and research and development, this
really does seem to me to send a signal. And so looking at those specific
recommendations, are there any that stand out to you that you think are going
to be particularly top of mind for Congress going forward?
Adam Thierer: Yeah,
well, first of all, at the highest level, it stresses the need to treat
different types of AI and different types of sectors differently, and that we
should take more of a sectoral risk-based approach. And a lot of people use
those phrases, but this report actually puts some meat on the bones of it.
It's easy to say risk-based approach, but it actually says right up front in the preface that the key principle would be identifying AI issue novelty. And really specifying, like, is this a truly new AI capability that we've never seen before and could never handle using existing law? Or is this really just an existing issue whose nature has changed because AI entered the picture? And you know, you look at those two buckets, you say, well, what were they talking about?
Well, on the first thing, like, something where AI is totally different, really quite new. I mean, you know, nonconsensual deep fakes
utilizing AI is something that has really become something policymakers are
concerned about. And I totally get it. There's some new capabilities there that
transcend old laws and capabilities that probably are going to require some
novel approaches. And we're starting to see them. And that makes a certain
amount of sense.
On the other hand, what's an existing issue that has been
changed significantly by AI? Well, how about AI/ML in medical devices? Or AI in
like autonomous systems like driverless cars or drones. I mean, you know, a
driverless car is in one sense a computer on wheels and that's new, but at the
same time, it's still something on wheels. It's a car. It's something we've
known about and regulated for a hundred years.
And the FDA has been off and running doing computerized
medicine or digital health for now the better part of 15 to 20 years. It's not
all that new. And this report finally gets serious about breaking things down
to smaller components.
And this has been my primary recommendation in all my testimonies to Congress, all the papers I've written. The only way we're going
to get anywhere on AI policy and governance as a nation is if we break it down
into smaller components. And we take a building block approach of like, okay,
how can we utilize these existing standards, laws, regulations, court-based
systems, and then how do we supplement them as needed.
This report really finally starts to think about that in a
serious way. It doesn't answer all the questions; it admits that it doesn't know how in some cases, and in other cases it just dodges a really sticky wicket and says we're going to pass on that one. It's very clear where they did that in
some chapters that we'll get to in a second, but I think at a high level that's
a great framing.
I think this is what we've always needed. You know, we've eschewed the idea of, like, you know, oh, here's the simple solution, here's the silver bullet. That just doesn't exist. And, you know, the people that were coming up with really far-reaching and, I would argue, radical solutions out of the gates, again, going back to that old hearing, were saying sweeping, broad-based things like, let's license AI. You know, what does that mean? They never unpacked it for us. This report is willing to sweat the details. The people behind it and the staff put enormous hours into it. I mean, they actually
spent some time thinking through those, you know, devilish details, and that's
good. We've made progress here in that sense.
Kevin Frazier: I
think a lot of listeners and especially the folks in the AI world who may be
checking out this report are going to do a control F quickly for open source.
So what are some of the takeaways for you from the report with respect to open
source? We've seen that this is one of the hottest topics in AI debate.
In part because the folks who are on the AI safety community side would say things like, hey, perhaps open-source models, by facilitating the spread of models whose full capacity we don't know, what they may be capable of, pose a really grave concern, especially from a national security perspective.
On the other side, we may see folks who are leaning into
innovation and democratization of this technology and saying, open source is
the way to go. And that's the way we make sure these benefits are realized
across the U.S. So this remains one of those sticky wicket issues you pointed
out. Does the report give us some clarity on how representatives Lieu and
Obernolte are thinking about this?
Adam Thierer: Well,
it does say some important things and you're absolutely right, Kevin. This has
become a really interesting part of the broader AI policy wars. And it again has created interesting sort of strange-bedfellow coalitions.
And in a recent piece that I wrote about the Trump
administration and their AI approach, I pointed out that one of the tensions in
the coming Trump administration may be between those conservatives who take a
very old fashioned approach to open source being a potential national security
vulnerability or threat versus the newer MAGA conservatives who think of it as
a great way to inject more competition and choice into a world dominated by
what they regard as the evils of big tech.
And that is a really interesting development. Because in the past, when I spent 10 years working at Heritage in the 1990s and ran their first digital technology program, back then I supported open source, but I didn't have a lot of friends in the institution who did. But now Heritage and other conservatives are vociferously behind, like, supporting open source.
And so where does this report come out? Well, the report says,
very interestingly enough, that open AI models encourage innovation and
competition, and there is currently limited evidence that open models should be
restricted. It goes into a little bit more detail about that and has some
generic recommendations that we should continue to study its vulnerabilities
and specifically how it could create, you know, dangerous capabilities, whether
they be chemical, biological, radiological, nuclear, and so on and so forth.
But at the end of the day, this paper, this report, has opened the door to, like, open source being a key part of the American AI story. And I
think that's a huge plus. The Biden administration, in my opinion, is pretty
good on this as well. The Biden NTIA within the Department of Commerce issued a
major report on this issue that I thought was excellent and really seriously
evaluated, you know, the so-called marginal risk question of what's going on
with open model weight systems and said, like, look, you know, it depends is
the answer.
And it's really hard, but we have to be careful about casting
too wide of a net for open systems or else you'll crush all of these beneficial
forms of innovation and competition. So this tension is going to continue to
play out. But the important point is that not only did this report not foreclose open systems, it kept the door wide open and kind of encouraged them.
And I'm extraordinarily pleased with that result.
Kevin Frazier: So
we're talking just 24 hours after the release of the report. Number one,
thank you for completing a very difficult assignment. I'm a mean professor for
assigning 273 pages of reading in one night.
Number two, I think that leaves us with a bit of speculation we
have to engage in, in terms of identifying, what are the aspects of the report
that are going to be kind of top of mind for the public and for policymakers
going forward? You've mentioned there are a couple key chapters that you think
stand out. What are some of those that you think warrant specific diving into?
Adam Thierer:
Absolutely. So there's a couple of areas here where I think there's a lot of
consensus, and I think they're going to be priority issues in 2025, both in
Congress and in the Trump administration. The first on that list, in my
opinion, would be energy use in data centers, which is a very interesting
chapter in the report.
And they do something in this report that's really, really
crucial in my opinion. They make it very clear that there is a symbiotic
relationship between the success of AI systems and the ongoing success of our
ability to diversify our energy portfolio and grow our energy, you know,
sources as a nation.
Then secondly, they link that to geopolitical strength and
security. And the report says, direct quote, AI is critical to U.S. economic
interests and national security and maintaining a sufficiently robust power
grid is a necessity. And then says again in another bullet point, continued U.S.
innovation in AI requires innovations in the energy sector.
So these things are now going to go hand in hand. We're going
to be having conversations about energy and AI policy at the same time. And in
my recent report that I mentioned about, you know, AI and the Trump
administration, I think this is a top line winner for the Trump team. They've
been talking about energy diversification and the importance of making sure
that basically we don't become Europe and become subservient to other nations
for our power. And, you know, we've got to make sure we can continue to power
all our important, you know, new sectors and innovations, especially AI and
data centers.
So this is going to move forward. Now, there's going to be a little bit of tension, I think, on the Democratic side about, you know, what kind of power generation are we talking about? Is it too much power consumption? What does the climate footprint look like? So on and so forth. But the report basically also says, look, I mean, AI can be part of that
solution as well. It can help us diversify and, you know, clean up our system
and find alternative sources.
So I think that's going to be a compelling narrative in 2025. It's essentially the linkage of two technological revolutions: the revolution in alternative and new energy sources, and hopefully a renaissance, a rebirth, for nuclear power in particular, but then also the continuing computational revolution. So that's one area where I think there's a lot of agreement.
Kevin Frazier: I'm
not a betting man, although I'm going to ask you to make some predictions at
the end of this pod. So hang on to your hat for that one.
But thinking about issues where we can see the stars align or strange bedfellows develop, this does seem top of mind for me. When you think through Democrats in rural states, for example, this is a huge win if you can bring home an energy project while also talking about innovation, national security, defense, right. Conservatives likewise can pay maybe less attention to, hey, is this known as a sustainable resource or what have you, solar, wind, hydro, and instead emphasize that innovation. So I do think this is maybe one of those
areas where we can see some consensus emerge. So I'm keen to keep the optimism
going. What's another area we're seeing?
Adam Thierer: Yeah, you're
absolutely right about that though. And another thing I should have mentioned
is that also plays into the discussion in this report about technology R&D.
And there's a little bit of an industrial policy component to
this that I won't get into too much, but the bottom line is that, you know,
that's a love fest. Everybody loves spending money in their districts and like,
you know, spending more money on new projects, whether it be energy projects,
data centers, whatever else. So I think that's going to be something we're
going to see action on.
The other thing I'll mention here, just very briefly, there's a
couple of key sectors that you hear lawmakers when they're having AI hearings
and other events and doing public speeches about these issues. They always come
back to a couple of key sectors.
One of them is healthcare and the other is agriculture. And
that makes sense because first of all, in agriculture, everybody, almost every
lawmaker has a soft spot in their heart for the old agrarian lifestyle and
farmers and everything else. And they're very, very excited for what AI and
robotics and computational technologies might mean for, like, improving, you
know, the farming system, the agricultural system.
I won't go into the details about that, but there's a pretty
substantial chapter in this report about agriculture and about how, quote, AI
driven precision agriculture could enhance farm productivity and natural
resource management.
Kevin Frazier: I just imagine Jefferson is rolling over in his grave thinking about, where did all of my human farmers go? They got replaced by AI. This is a problem, but I'll leave aside discussion of the Federalist Papers and anti-federalism.
Adam Thierer: Just as
an aside, I come from a family of farmers in Midwestern Illinois and, you know,
when I go back to see the family farm these days, I see uncles and friends out
on tractors that are completely robotic, you know, computerized systems with their GPS linked, that are sort of plowing the fields for them as they sort of sit back listening to satellite radio in the air-conditioned cab of their tractors, drinking a beer. It's like the world has already changed.
This revolution is upon us.
Kevin Frazier: This
is going to change country music tremendously. All references to horses are gone.
Adam Thierer: Right,
right. So that's going to be an area where the only real question is, like, how does that manifest itself? What is Congress going to do? You know, I'm not sure what it means in terms of policy, except for, like, okay, encourage more, you know, AI and robotics on farms. Maybe they spend some more money on it, I'm not sure.
But healthcare is the really big one. And on healthcare, you hear a lot of lawmakers talk about it. Here's the first bullet point from the report on it. Quote, AI's use in healthcare can potentially reduce administrative burdens and speed up drug development and critical diagnoses. This is a crucial thing. And you know, pretty much every congressman and woman, they really all want to serve forever, to live a little longer. And like, if AI can help them live that longer life and serve longer, they love it. They love it. But they also love it because quite
practically speaking, it's an important way to potentially not just reduce
administrative burdens, but administrative costs associated with a healthcare
system that is, you know, really out of control in terms of those costs.
And there's already been a lot of literature on this and a lot
of it cited in the report. And I think Congress is very eager and trying to
figure out how to use more pilot programs, or, like, inject these sorts of experiments within the Medicare/Medicaid system as well as, you know, insurance
more broadly. So I think that's a common theme that everybody agrees with. I
mean, there's a lot of love there.
It's a little bit more controversial when you get to some other
sectors. There is a major section on FinTech in this report. There's some
recommendations in it, but it's a little bit more generic. And I think at the
margin there, they were, you know, trying to be careful about what they said
because I think that does divide some members more than ag and healthcare does
in terms of AI's role.
Kevin Frazier: Before
we have to, unfortunately, draw this fascinating conversation to a close, I'm
keen to hear more about one of the blind spots you identified in the report. So
you said in your write-up, you know, this is a great example of returning to an
emphasis on agile, flexible, incremental regulation.
But you flagged one big issue, and that was, in your opinion,
insufficient attention to the fact that we have this patchwork of state laws
developing that could really hinder AI innovation. If I were
to phone a friend right now, I'd call David Rubenstein, and he'd say, Adam just
doesn't get it, we need AI federalism.
This idea he's coined is about making sure that we lean into states and celebrate states as a sort of laboratories-of-democracy approach. They
can test ideas, we can identify best approaches, and then Congress should take
action. You seem to be of a different mindset, and to the extent I represented
David accurately, how would you counter that idea that now's the time for AI
federalism?
Adam Thierer: Well,
just in response to the ghost of David that haunts this call, I'll just say this: the first of my dreadfully boring ten books was on federalism and
interstate commerce in 1998. And I really openly struggled with the questions
about how it's applicable to various types of emerging technologies.
Because it's legitimately hard to know in some cases what we
even mean by interstate commerce in the world of digital bits and algorithms,
right? You know, generally speaking, these things don't stop at state borders,
and they really shouldn't, and I want to see speech and commerce flow as freely
as possible.
I mean, that's what makes America really great. And really, in
the internet revolution, we proved it. We had a national framework. In fact,
the Clinton Gore document was called the Framework for Global Electronic
Commerce. And there was an acknowledgement at that time in the
Telecommunications Act of '96, in Section 230, in the Internet Tax Freedom Act,
and a whole variety of other things, that we needed a national vision for this
new global medium and resource. And we got it.
But what's changed in the subsequent 25 plus years now is that
the states have very aggressively asserted themselves. And not only just
asserted themselves, but actually got out in front on digital technology
policymaking in the United States. And they have led with two arguments.
One is that, well, Congress has just become a dysfunctional mess and they're not doing their jobs anymore, so we'll do it. Whatever you want to think about that, it's a leading argument by many of the state officials pushing for state AI bills. The second argument that they make is basically
like, you know, well, we already have all these existing, you know,
capabilities and laws and, you know, why can't we apply them to AI? And that's
a more legitimate argument in my view.
But the problem is that the way they're applying those laws and
what they're saying is that we should apply them more preemptively and
prophylactically. That we should basically take and adopt a mini EU AI Act model state by state and have a bunch of sort of preemptive rules that say you have to run your algorithm through some sort of a screening process or, you know, the so-called algorithmic impact assessments, and a new bureaucracy will be set up.
And this is, as I've argued in my work, a sort of guilty-until-proven-innocent model of digital policy. It's sort of like prior restraint for
digital bits and algorithms until you get the blessing of some authority to
move forward. That's the European model. I don't like it. And I think it's
really problematic in particular for small businesses, which is a major focus in the report that we didn't discuss.
But basically, the report goes out of its way again in a very
bipartisan way to say we need more quote unquote little tech competition to
fend off big tech and like too much concentration. Well, you sure as hell ain't
going to get it if you've got a patchwork of 700-plus laws moving in the states, as we have today, that are contradictory and highly costly with compliance
costs that only the largest technology companies and their legal shops can
afford to deal with.
And so we have to talk seriously about the two sides of
federalism. And yes, I get it, states' rights are important and they should have some flexibility, but not unlimited flexibility. Congress needs to, you know, assert its role and establish the fact that this is a national marketplace, and that we do have legitimate interstate commerce and speech concerns here that transcend state borders.
They at least need to play that role and put a little bit of
fear of God in the hearts of the states to say, don't overstep your bounds as
you do these things. But I'd like to see them go further and actually start to
do some serious preemption, which is what I testified in front of this House AI
Task Force on.
But I'll tell you this, as I wrote last night about the report,
I got a very icy reception. There's just not much of an appetite for preemption
in Congress these days. In fact, of the over 100 AI bills that were pending in
the 118th Congress, and I read pretty much all of them, I think, I didn't see
preemption mentioned in one of them.
And when I talked that day to lawmakers directly about, like,
why we needed this, and we needed it to be more like ceiling preemption as
opposed to floor preemption, I didn't get many takers. I got Representative
Obernolte. He's made this a priority from day one. He wants some sort of
preemption. And even Representative Lieu, the Democratic, you know, co-chair,
he kind of gets this. But not many other people were interested, and there are no laws proposing it.
And so we're about to witness it: 2025 is going to be the year, I've predicted, that we're going to see the mother of all technocratic regulatory patchworks in the United States. And I don't see Congress doing much to stop it, and the report has very little to say about it. It's mushy, mealy-mouthed stuff about, like, well, we should just look into this more. Literally, it recommends a study.
The lazy out in Congress, when, you know, they've reached a point of an impasse where they just can't go in for it, is, like, let's do a study on this issue. That's chickening out, is another way to put it.
Kevin Frazier: You
can always study more things. Well, before we sadly have to let you go, you've
made one prediction.
Now I have to ask for one more. Speaking of icy receptions,
your Indiana Hoosiers are going to be traveling to South Bend, Indiana for the
first round of the college football playoff. For those who don't follow Adam
and I on X, we've exchanged a few tweets, or whatever we call them now, about
my Oregon Ducks, the greatest team known to man, and Adam's Hoosiers.
And so Adam, how are you feeling about the Hoosiers taking on
the Fighting Irish? Do they have any chance?
Adam Thierer: I'm
feeling great about it, except that Notre Dame's got God on their side, and I'm
not sure the Indiana Hoosiers do, but I'm still rooting for them hard. I
think they've got a great chance, but after that, Georgia awaits.
I'm pretty scared about that day, so we'll see. So if Georgia
and Oregon get matched up at the end of this, I'll bet you on that. I'll take
Georgia over your Ducks.
Kevin Frazier: Oh,
geez. Okay. You've heard it here, folks. With that, we'll have to let you go.
Thanks again for coming on, Adam.
Adam Thierer: Thanks
so much for having me. I enjoyed it.
Kevin Frazier: The Lawfare
Podcast is produced in cooperation with the Brookings Institution. You can
get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare
material supporter through our website, lawfaremedia.org/support. You'll also
get access to special events and other content available only to our
supporters.
Please rate and review us wherever you get your podcasts. Look
out for our other podcasts including Rational Security, Chatter, Allies, and
the Aftermath, our latest Lawfare Presents podcast series on the
government's response to January 6th. Check out our written work at
lawfaremedia.org. The podcast is edited by Jen Patja. Our theme song is from
Alibi Music. As always, thank you for listening.
