Scaling Laws: How AI Is, Will, and May Alter the Nature of Work and Economic Growth with Anton Korinek, Nathan Goldschlag, and Bharat Chandar
Published by The Lawfare Institute
in Cooperation With
Anton Korinek, a professor of economics at the University of Virginia and newly appointed economist to Anthropic's Economic Advisory Council, Nathan Goldschlag, Director of Research at the Economic Innovation Group, and Bharat Chandar, Economist at Stanford Digital Economy Lab, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to sort through the myths, truths, and ambiguities that shape the important debate around the effects of AI on jobs.
They discuss what happens when machines begin to outperform humans in virtually every computer-based task, how that transition might unfold, and what policy interventions could ensure broadly shared prosperity.
Follow them to find their latest works.
- Anton: @akorinek on X
- Nathan: @ngoldschlag and @InnovateEconomy on X
- Bharat: X: @BharatKChandar, LinkedIn: @bharatchandar, Substack: @bharatchandar
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
This Scaling Laws episode ran as the November 14 Lawfare Daily episode.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/
Click the button below to view a transcript of this podcast. Please note that the transcript was auto-generated and may contain errors.
Transcript
[Intro]
Alan Rozenshtein: It
is the Lawfare Podcast. I'm Alan Rozenshtein, associate professor of law
at the University of Minnesota and a senior editor and research director at Lawfare.
Today we're bringing you something a little different: an
episode from our new podcast series, Scaling Laws. It's a creation of Lawfare
and the University of Texas School of Law, where we're tackling the most
important AI and policy questions, from new legislation on Capitol Hill to the
latest breakthroughs that are happening in the labs. We cut through the hype to
get you up to speed on the rules, standards, and ideas shaping the future of
this pivotal technology.
If you enjoy this episode, you can find and subscribe to Scaling
Laws wherever you get your podcasts and follow us on X and Bluesky. Thanks
for listening.
When the AI overlords take over, what are you most excited
about?
Kevin Frazier: It's
not crazy. It's just smart.
Alan Rozenshtein: And
just this year, in the first six months, there have been something like a
thousand laws.
Kevin Frazier: Who's
actually building the scaffolding around how it's going to work, how everyday
folks are going to use it? AI only works if society lets it work.
There are so many questions that have to be figured out and
nobody came to my bonus class. Let's enforce the rules of the road.
[Main episode]
Kevin Frazier:
Welcome back to Scaling Laws, the podcast brought to you by Lawfare
and the University of Texas School of Law that explores the intersection of AI,
policy, and, of course, the law. I'm Kevin Frazier, the AI Innovation and Law
Fellow at Texas Law and a senior editor at Lawfare, joined by a trio of
economists.
First, there's Anton Korinek, a professor of economics at the
University of Virginia and newly appointed economist to Anthropic’s Economic
Advisory Council. Second, there's Nathan Goldschlag, director of research at
the Economic Innovation Group. And third and finally, there's Bharat Chandar,
economist at Stanford Digital Economy Lab.
Today we're tackling the question that's dominating headlines
and earnings calls: how is AI disrupting the workforce?
As companies like Amazon announced staggering layoffs, it's
unsurprising that more and more Americans are wondering when they may find
themselves on the wrong end of a CEO's effort to adjust the company to the age
of AI.
This trio of experts is well suited to take on this important
topic, and with that, giddy up for a great show.
So everyone knows when they pull up the latest edition of the
Wall Street Journal, or the New York Times, or really basically any newspaper
these days, there's seemingly some headline about how AI is causing some degree
of economic turmoil, whether it's job displacement, whether it's the quote
unquote AI bubble, or whether it's just the future of the economy itself. And
that's why I'm so jazzed to have three incredible economists with us today on Scaling
Laws.
Anton, Bharat, and Nathan, thank you so much for joining.
Anton Korinek: Great
to be here with you.
Bharat Chandar: Thank
you.
Nathan Goldschlag:
Happy to be here.
Kevin Frazier: Bharat,
let's start with you. In terms of economic consensus, what is the sort of status quo among economists? Which is a lot like saying, you know, tell me about how frequently lawyers agree on the interpretation of the Constitution, but for our audience, can you kind of outline what points of consensus and disagreement exist among economists?
Are we seeing a sort of doomer camp of economists,
accelerationist economists? What does it look like if we were to draw a map of
how economists are thinking about AI?
Bharat Chandar:
That's a great question. There's a couple angles to that. There's the
productivity angle, I think, and then there's the labor angle.
So I guess I'll start with the productivity angle, because I
think that informs how we think about some of the labor market impacts. On the
productivity side, I would say that the median economist is probably more
skeptical of productivity impacts than the median technologist as you might
expect.
Now, that said, there are definitely economists kind of across
the spectrum and, you know, wonderful people like Anton are kind of modeling
the potential impacts of transformative AI and how that might reshape the
economy, both on the productivity side, on the safety side, on the labor side,
et cetera.
On the labor side of that equation, I think we can both talk
about future impacts and current impacts. My view of what we're seeing in terms
of current impacts is that overall, and this is suggested by a few studies, including by Nathan, myself, and the team at the Yale Budget Lab, so far we probably have not seen major impacts of AI on the labor market.
But that said, there are certain segments of the population
where we might already be seeing notable impact. And in particular that's for
young workers in jobs that are more exposed to AI, such as software
development, customer service, things like that.
Erik Brynjolfsson, Ruyu Chen, and I, my colleagues at Stanford, wrote a paper called “Canaries in the Coal Mine,” where we documented that employment in these occupations has been declining over the past couple of years, and we evaluate some alternatives such as interest rate changes, the return from work from home, et cetera.
But there seems to be kind of a robust relationship between
these employment changes for entry-level workers and AI exposure.
Kevin Frazier: So it
sounds like the question really is about magnitude of these changes and the
timing of these changes.
It's no secret that when we see new technology come about,
there's always going to be some degree of changes in what professions are
rewarded in what way, and how the market has a different demand and supply of
those jobs. I wonder, Nathan, for you, when you see headlines like we did late in October of this year, 2025, about Amazon announcing 14,000 layoffs and
perhaps as many as 30,000 in the near future, does your mind immediately go to,
it's the AI and it's only the AI, we've gotta blame AI for everything?
Or, what would an economist's analysis of that headline look like? What are we missing when we don't get the full interpretation of these sorts of massive disruptions to the labor market?
Nathan Goldschlag: I
think it's sort of three things.
The first one is that a company that announces that they're
reducing their labor force because of their integration of a new transformative
technology sounds a lot better than ‘we hired too many people and we need to
contract,’ or, you know, ‘the sales aren't going as well as we thought, we need
to contract.’
One of those stories sounds a little better to your
shareholders. That's one thing.
Second is that, you know, it might be tens of thousands of employees, maybe at one firm, but you've got to keep in mind that gross flows in the labor market are enormous, right? So there's millions of jobs that are created and destroyed every quarter, right?
So these are, you know, sort of a drop in the bucket that hits a headline because, yeah, the firm size distribution is quite skewed, but it's still the case that those sorts of numbers are not going to be moving the needle on the economy. And then the third thing, in the back of my mind, is, just as Bharat said, that we've sort of looked at this question, right?
I've done it. Bharat and his team at Stanford have done it, the Yale Budget Lab, like he mentioned. And when you look for evidence of displacement effects for the economy overall, we just aren't seeing it yet.
So I am open to the idea that there's going to be really
concentrated pockets of job displacement, and it may actually happen within
certain types of firms, but I don't know that we have the evidence or the
foundation yet to make that claim.
Kevin Frazier: So
everyone loves to say that AI has a jagged frontier, right? It's everyone's
go-to of saying, yes, we will see some displacement in some professions sooner
than others, and that's unsurprising, right? When we saw the introduction of
the Model T, for example, perhaps the horse pooper-scooper lost their job first
rather than X, Y, and Z, you know, horse mechanic or horse keeper.
Man, I am really showing my lack of horse knowledge today, but
perhaps the horse pooper scooper was the first to go. Other professions, how
they're impacted when and to what extent, is a lot harder to predict.
Anton, I wonder, from your vantage point, how would you
characterize the current understanding of AI and its likelihood of causing both
short-term and long-term changes to the economy?
Anton Korinek: So the
current consensus among economists is that AI is definitely a general-purpose
technology, but not necessarily something transformative on the scale of the
Industrial Revolution yet. So the current AI systems, they can increase our
productivity. Maybe they're going to deliver something like the growth effects of the internet boom, but they are not going to do something that is fundamentally different from that, which is what, for example, lab leaders would be predicting about the economic impact of AI. And you know, the way I view it is, as of right now, the main mode in which workers interact with AI is the chatbot format.
In a chatbot format, you have turn-taking: you need a human to enter prompts, then the AI can give a response, then you have the human respond to that again. And that is, kind of by design, a technology that is complementary to work. So we have all these different scores for which jobs are affected by AI, but as long as we use the chatbot to interact with it, it is affected in a complementary fashion.
is currently playing out is that we are moving from chatbot interactions to AI
agents, and I personally believe that change is going to also fundamentally
change the nature in which AI will affect labor markets.
So the more these autonomous agents roll out, the more AI will
become a technology that actually displaces work rather than just complementing
it.
Kevin Frazier: So
this really gets at the core concept of the difference between augmenting work
and automating work, and the fact that with these AI agents, the presumption or
one of the common definitions is an AI agent that can do any task you can do on
a computer.
Which for a lot of jobs, if it can perform the email function,
the research function, the memo analysis, all of these functions––well,
suddenly, for a wide swath of jobs, we are left asking, as you're pointing out,
Anton, this isn't just augmentation, this is complete wholesale automation of the key tasks of that job.
And so Bharat, I want to come back to you for a second, because
you flagged that perhaps the tech workers are a little bit more bullish about
how soon this automation may occur. Can you give a little bit more detail about
why you may be a little skeptical of claims by folks like, I believe it was Dario Amodei, forecasting, just, a wild end of white-collar jobs by 2030, and other folks really warning that early-stage careers may be by the wayside within a matter of years.
What's your own sense of the rate of technology adoption and
diffusion right now with respect to that critical question of getting to
automation rather than just augmentation?
Bharat Chandar: I
think it helps to start by thinking through this historically.
So technologies in the past, they've destroyed work. And at the
same time that they've done that, they've created new forms of work, and
they've created new labor demand for existing work so that today the
unemployment rate is under 5%.
That's pretty low by historical standards, despite all the technological change that we've seen over the past couple hundred years that replaced, you know, for example, the horse scooper that you were talking about before.
So even as these technologies displace labor, they also create
new forms of labor demand and create new forms of work that allow people to
continue to find employment in the labor market.
And I think where the uncertainty lies is whether AI is fundamentally different compared to prior technologies. And I think one way that you could think it might be different is that, because the models are improving very quickly and because they're becoming more agentic, like Anton was saying, you could imagine a world in which the new work that's created and the new labor demand that's created is also being done by AI in the future.
And I think that is one way that you could think it might be
fundamentally different from prior technologies. Now there's a lot of
uncertainty about when or if that might happen. And I think a lot of economists
would disagree on that possibility.
So I think we're in the stage where we need to do more research
to understand how it might impact the labor market going forward, how the model
capabilities are improving, and which dimensions they're improving in.
And is there a possibility that we might even be able to direct the nature of the technological progress in this technology, so that it becomes more augmentative versus more automating in nature?
Kevin Frazier: Yeah.
And what I appreciate about this conversation, and why I love talking to economists, is that, as an undergrad economics major, so not nearly on any level like y'all, what I always loved is we're talking about numbers.
We're trying to quantitatively analyze things. We're trying to
do really grounded empirical analysis.
And Nathan, from the vantage point of economists generally, where
do we need more data? Where do we need more information? What would make your
job easier?
What are some of those kind of big gaps in information that
would allow us to have a little bit more certainty about how AI is impacting
the labor market and how it may do so in the relatively near future?
Nathan Goldschlag:
It's a great question. So I did spend the majority of my career, almost 17
years at the US Census Bureau as an economist studying firm dynamics and AI as
well. And so while I was there, we designed new survey questions for firms to
figure out who was or wasn't using AI.
One of the things that came out of that work, which sometimes
surprises people, is that AI use rates among firms are still something like eight, you know, nine, or 10%.
It's still quite low. It can be really high in the information
sector, something like 25%. But overall, the use rates are still pretty low.
But, you know, in terms of the data we would need, I think, for deeper measures of adoption, right, one of the questions that we asked in those surveys is, sort of, due to the adoption of AI, how did your demand for skill or demand for labor change?
There's lots more we can do with those types of questions to
kind of get, you know, self-identified causal estimates from the firm. Because
the firm is saying, you know, ‘I use this technology and this is the
consequence of that.’ So having larger panels, but then also getting additional
data at the worker level.
There's a couple different surveys that went out and tried to
ask workers, do you use AI, you know, generative AI in the past two weeks at
work or something like that?
You get a much higher use rate in that case, something like 25 or 30% or something like that. You know, it's higher than the firm-level use rate, but it's not entirely clear what that use is.
So if you're just using it as a substitute for Google, that's
not going to generate the sort of productivity effects that economists
typically think about. So, additional measures of how firms are using AI, how
workers are using AI, and how it's impacting, you know, the demand for labor
and skill. I think those are going to be really key.
And the most important bit, right, from an economist's
perspective is to have longitudinal data, right? We need to be able to see
adoption and de-adoption over time, and to sort of get a sense of how the
adopters look different from the other firms that don't adopt, and then how
their trajectories change over time.
Kevin Frazier: I love
the question of adoption so much because the generic question of ‘are you using
AI?’
I mean, sure, yeah. I made a Studio Ghibli meme last week and
it was hilarious, and then I was on Sora for five minutes or whatever. Sure, I
adopted AI, but to your point, Nathan, that's not the transformative economic
use case that we really need to get at.
And what I think is telling as well is we keep seeing these
reports, for example, from MIT a few months back, about 95% of all institutional AI adoptions failing. Which, I know, is a study that's very much contested, but it's not asking, for example, well, did you do any work to train
your employees how to use AI and to actually adopt the tools that were best
suited for the task at hand?
So this is just such a complex question. And Anton, I wonder,
for you, when you are starting to think about your research agenda, what's top
of mind? What are you studying now to try to provide some more clarity around
these really weighty questions?
Anton Korinek: I
think the work that Bharat and Nathan are doing is really crucial to give us a
view of what is going on right now. Or to be honest, if we look at data, it's
always a view in the rearview mirror of what has been happening in the recent
past, right.
And I think the crucial thing is we have this technology that
is rapidly evolving and getting better so quickly, and we need to be prepared for future scenarios that we can't quite see in the data yet. So in order to get a better sense of what's ahead, I think it's useful to distinguish between, on the one hand, the frontier model capabilities, and then, on the other hand, this crucial question of diffusion that Nathan was just speaking about. If we only study diffusion with a rearview mirror, it tells us what happened, let's say, three months ago or six months ago. But it doesn't tell us what we should prepare for six months from now, or 12 months in the future.
And so, observing what is happening at the frontier, what I see is that models are getting better very quickly. I believe that the labor market effects at firms that are using the frontier-level capabilities are going to be much starker than what we see, let's say, for the first three quarters of 2025.
Now there is also a lot of uncertainty about it. That's kind of
the downside of trying to predict the future, right? But I think, given that
the technology is evolving much more rapidly than any prior technology, it is
crucial to kind of embrace that uncertainty and to make sure that we are also
prepared for radical scenarios.
So one way of doing that, which I am engaged in in my work, is scenario-planning: trying to predict what scenarios in which AI rapidly advances to human-level capabilities across many different areas of application would imply for the workforce, for productivity, and for other economic measures.
Kevin Frazier: Yeah,
it's certainly a fascinating thing we're seeing play out across all of these
professional domains, of anticipating: if we had X occur tomorrow, if AGI is reached tomorrow, if superintelligence occurs tomorrow or in the near future, how do we respond?
And that obviously raises a whole can of worms about, okay, if
for example, Amazon does lay off all of its software engineers, or a large fraction of them, next year or two years from now, that's a lot of smart young people, smart men in particular, just hanging out in Seattle.
What the heck are they going to do? What does that mean for the
future of the country? So on and so forth.
But Bharat, I wonder from your perspective and research agenda,
what is driving your consideration of helping policymakers respond to this
moment? Because one of the things we've discussed across this trio is that, over the long term, there's always been a sort of correlation between technological progress and adoption and improvements in societal wellbeing and the general welfare, all else equal.
The country that leads with technology generally has a stronger economy and more prosperity, but it's not exactly a compelling political message to get up on your soapbox at that Seattle public market and say, look, Amazon employees, you'll be fine. In 10 years the U.S. is going to be a leader in GDP, so just buckle up and hang on.
So how can you all, as economists, help figure out this very
difficult and somewhat unavoidable trade-off, right? We can talk about UBI and
some of the other solutions that people may have to this down the road, but in
terms of just the historical precedent of saying, ‘we know tech progress leads
to some of these trade-offs, how can we navigate them,’ what are your suggestions, or what models are you thinking through for policymakers to keep top of mind?
Bharat Chandar:
Great. I think Anton raised a great point just now about if we want to do
projections going forward, we can't just look at historical data. We also need
economic models to interpret the data and do predictions about what might
happen going forward.
And just like him, I'm also interested in doing some scenario-planning
around that, particularly on the labor market side. And the nice thing is
because we have years and years of research in this space, there are existing
modeling tools that we can use to think through potential counterfactual or
simulated labor market impacts on different sectors and how that might affect workers
going forward.
So that doesn't mean that we're going to see this mass
disruption in the next several years, but we can ask questions like, if we did
see this, how might that affect the economy in equilibrium? Like, how will
people shift across different sectors of the economy, different occupations, et
cetera?
And I think an implication of that is we could start thinking
through what would be policy responses. There's obviously been a lot of
discussion about UBI, but we could probably be more creative about labor market
policies as well, and how that might affect the trajectory if we do start
seeing greater job displacement.
So that's definitely on the research agenda for me. How do we think
about modeling these implications? What could the potential impacts be
depending on the level of AI progress that we see?
What are the sectors that we expect to be more impacted? And
then how will that reshape how workers, you know, switch jobs across the
economy, and where we might expect the greatest growth and then the greatest declines?
Kevin Frazier: And Bharat,
just to stick with you for a second, for the early-stage professionals in
particular, imagine you've got an 18-year-old daughter.
She's on the precipice of going to Brown or some other great school. What is your advice? I'm sure you get this all the time: do you go into computer science? Do you go into, you know, canoe trip planning?
What is the major that you would recommend, or how would you
advise just young professionals navigate this turbulent time, especially given
your research in the “Canaries in the Coal Mine” paper?
Bharat Chandar: So
let me start by just talking as an economist for a little bit.
I do think that we know very little about what's going on on the education front, and I think that's kind of crazy. Frankly, I think we don't know how AI is impacting students' choice of major or the choice of career that they want to pursue after school.
We don't know very much about how it's impacting their learning, what they are learning in school, how it is changing curricula, et cetera.
So I think there's definitely a lot of need for research in that space.
Now, to directly answer your question, I think the answer that
I would give is that these technologies do create a lot of opportunity that we
could have never imagined in the past.
I think you can build things, learn about things in ways that
were just not possible before. You can learn about a topic, ask an expert any
question that you want, and get almost a perfect answer immediately.
And I think that's unbelievable. You can build a website from
scratch. So I do think that there are opportunities for building things,
learning things that we could have never imagined before.
And, you know, I would definitely encourage young people to make use of those tools.
I didn't have that when I was, you know, entering undergrad.
Kevin Frazier: Okay,
so I heard you recommend canoe trip planning as the major.
Bharat Chandar: I'll
just, I'll pass. Yeah. Building those social skills. Yeah. Management skills.
Kevin Frazier: There
you go. There you go.
So Nathan, you have a unique background with that census portfolio and an understanding of what questions we should be asking, or what questions policymakers should have top of mind to track. And I wonder, when you are thinking about the statistics that tend to dominate the headlines, in terms of the unemployment rate, or even statistics, as Bharat's research highlighted, about unemployment among specific groups and adoption among specific communities, what statistics or trend lines do you think we're not paying enough attention to?
You know, what data for policymakers would be the most
influential in guiding some of these key decisions over the next few years?
Nathan Goldschlag:
So, I think part of Bharat’s answer started to hint at this idea of
reallocation, which I think is going to be really important.
So, you know, one of the things that, you know, may come as a surprise to your listeners is that there's been something like a 35- to 40-year decline in business dynamism in the United States. And it's not just the United States, by the way. Most developed countries have experienced a decline in the rate of new businesses, of people changing jobs, job creation, job destruction.
All those different measures have been declining for years and years. After COVID, we had sort of this surprising uptick, right, where there was like an increase in entry, lots of new business formation. Some of that was in response to sort of new opportunities, like new internet-based businesses, that sort of thing.
But it was a spark of, like, maybe this is something that could be sustained, and if it was, by the way, it would be really important for growth, right? Because new businesses are usually more likely to introduce new ideas. They play a disproportionate role in job creation.
Now, all of that is to say, right, what does that mean? What does this mean for AI? I think, you know, first things first would be measures of dynamism, right? So a lot of the discussion that we've had so far hints, in one way or another, at the reallocation of labor and capital that's going to be induced by this new technology, right?
And so if there are new production processes that firms could
adopt that would make them more productive, either augmenting or substituting
for labor, all those different things are going to involve some form of
reallocation of resources.
If it's the case that certain types of degrees become less valuable because, you know, the overlap and the substitution effects are so strong, right, that's going to involve reallocation of degrees as well, where, as students enter college or, you know, post-secondary education, they might be sort of shifting the composition of the types of degrees they get. Same thing could be happening to the occupational distribution.
So I think the statistics that I would say to watch are the
ones that are based on reallocation and people making choices based on a changing
landscape that they're facing, changing incentives.
And the same thing, by the way, would be my answer for the 18-year-old that's thinking about what degree. So it's sort of like, you need to have an eye on the ways in which whatever you choose to do is going to be impacted by AI and the ways that it can potentially improve the things that you do, but then also a heavy emphasis on communication and interpersonal skills, and being able to sort of lean into those soft skills that are likely going to remain something that humans are better at.
Kevin Frazier: And
this reallocation concept, and dynamism more generally, I think is so important to call more attention to, because folks just aren't used to talking in these terms, of saying, well, a healthy economy isn't necessarily one in which you have the same job for the entirety of your career at the same company.
In fact, it's arguably way better in the aggregate to have more entry and exit of firms, as well as the ability as a worker to move across state lines, to move into a new profession, so on and so forth.
And yet there's a lot of stickiness, both with respect to the
firms we have in the market currently, and with respect to ourselves as being
able to move to a new state.
As someone who's moved to, God, what does my wife remind me of?
I think it's seven states that we've moved to, it sucks. I hate it. I can't
tell her how much I hate it or else we'll never get to move again. But no one
likes moving.
But if you want to have that economic dynamism, especially in a
time where we're going to have new jobs get created for new markets that no one
can predict, that's all the more important.
And so, Anton, I know the temptation when talking to a group as
smart as you all is to say, please solve this. What is the silver bullet solution to alleviating everyone's concerns about the AI economy? But
I'm going to flip that on its head because I'm a nice host, and instead, I
would like you to identify one or two of the worst ideas you've heard.
What is a solution, a solution in air quotes, that's been offered that you just think, oh my gosh, if we
follow that, ugh, bad things ahead. What is one of those for you that stands
out and why?
Anton Korinek: Well, Kevin,
if you allow me, I will start by taking a bit of a step back.
Kevin Frazier: Go, go
for it.
Anton Korinek: The ultimate
objective, both in our economic models and in what I suppose all of us economists are striving for, is human welfare, not GDP.
Many times those two kind of move in tandem. People generally don't like recessions, which is when GDP goes down. People love booms, when GDP goes up and all boats are lifted. But sometimes the relationship does not hold one for one.
And especially if this technology turns out to be really labor-displacing, then it may be one of those episodes. So coming back to what are bad ideas: I will say this a little bit tongue in cheek, because you focused so much on it, and it depends very much on which scenario we are heading into in the future. But there is this possibility out there that our leading labs are going to create something like AGI, artificial general intelligence, which by their definition, if you look at, let's say, OpenAI's charter, would imply AI systems that can effectively accomplish most economically valuable jobs. So if they do reach that, and right now we have to all acknowledge there's a tremendous degree of uncertainty about it.
And it's speculation. It's no certainty at all. But if that happens, then at some level labor, which would now have to compete with these machines that can do essentially most economically useful tasks, would be fundamentally devalued. And then just prescribing more dynamism, as you just did, may be a very bad idea. So in the short term, while labor is extremely valuable, I agree that may be a good prescription.
Although we should also emphasize that people generally like
some stability, right? So we don't want just unfettered dynamism, where everybody has to change their job every day, because stability is also something valuable in people's lives, especially if you're trying to raise a family, for example.
Now, if our economy fundamentally changes, if it becomes an AGI-driven economy, then I think we have to acknowledge that the role of labor will have to decline, and just telling people to be more dynamic would not be a solution.
Kevin Frazier: So Bharat,
I'll turn the question to you. The worst or among the worst ideas you've heard,
and I will recognize Anton's wonderfully phrased caveat that, of course, the worst solutions today might actually be some of the best solutions in a different future.
But in terms of any ideas you've heard so far that you just
want to yell from the rooftops, ‘please no,’ what comes to mind for you?
Because I think that your very practical research has received a lot of
headlines and a lot of attention from people who are interested in a lot of
these policy conversations.
Bharat Chandar: Well,
I think one side of that is that I actually haven't seen very many serious
policy proposals for dealing with AI, and that might be because we're in the
very early days of studying it. I do think, you know, we've heard a lot about
UBI, which is, you know, not the most creative solution in the world. And I do
think, you know, we value work for reasons other than just money.
You know, I think it's a useful thing to consider, but we can
also be more creative about policies that we think about. I do think one idea, and this isn't just limited to AI, is that the degrowth idea is probably not a very good one. So the idea that we should stop economic growth or even scale back our, you know, technology or material wellbeing because we're worried about impacts in various domains, including potentially AI, I don't think that's a very good one.
I think, if you're worried about certain aspects of the technology, there are policy proposals that you could consider that might mitigate those concerns without compromising the wellbeing and the growth in material consumption for future generations in a way that could be very harmful.
So that's probably one that I would mention.
Kevin Frazier: Yeah,
and that really builds well on Anton's point that, while GDP growth and economic welfare may not always move one-to-one, certainly declining GDP and economic welfare usually do not align very well. So thanks for sharing
that.
And Nathan, no pressure, but I expect a home run from you.
You've had the longest time, all right, to talk about this issue. So blow me away. What––
Nathan Goldschlag: So
it's not fair. It's not fair because Bharat took the easy one, the low-hanging fruit. Degrowth is a terrible idea. Okay, alright, so that's the easy one.
I'd say there's an interesting tension in this discussion, which, you know, isn't necessarily some specific policy idea, but there's this idea that kind of floats around in the background among folks that are really concerned about AI, which is, ‘can we freeze the economy in amber,’ right?
There's a lot of, you know, sense that, well, the occupation distribution we have now is somehow a sacred cow that we need to make sure persists, right? And I think that's a mistake, right? I think David Autor has this nice summary paper of, like, a hundred years of automation research, and, alright, so something like 40% of employment was in agriculture in 1900, and by 2000 it was like 2%, right?
And so, you know, the concerns about, like, the total substitution of labor, that sort of makes sense. But, you know, if we're sort of in the normal realm of the evolution of technological change, and maybe it's a little faster, maybe it's a little more intense, there's more occupation-mix switching, all that sort of thing, so long as we're within the normal bounds of what we've experienced in the history of technological change, right, I think, you know, you'll want to facilitate the reallocation rather than trying to freeze the current composition of the economy in amber.
Kevin Frazier: Yeah.
And I love that, because I wasn't being facetious, or totally facetious, when I brought up the horse pooper-scooper.
I went back to the New York Times archives and tried to find any coverage of, you know, protests of the horse pooper-scooper, or let's protect the horse pooper-scooper. You don't find it. There's not a, you know, ‘we need to make sure that we look after the livelihood of this specific occupation.’
Should we look after the livelihood of those individuals? Of
course. But this is a really complex question. So with that said, one final kind of lightning-round question, and I'll just defer to anyone who wants to raise their hand and suggest an answer here, which would be: is there a jurisdiction today that you think is approaching the AI and economy question well?
And this can either be from a data-gathering standpoint or from
a policy adoption perspective. Is there a jurisdiction where you would say more folks should know about how X is handling AI and the economy?
Nathan Goldschlag:
I'll take a shot at this, you know, shameless promotion of the Census Bureau.
Kevin Frazier: Love
it.
Nathan Goldschlag: I
think, you know, with the history of measuring technological change through federal statistics, it very often is the case that the stats agencies start measuring it after it's basically already fully diffused, right? You know, there's not a lot of cases that you'll find where the federal statistics were sort of forward-looking, looking around the corner at what's to come. And I think with AI and robotics, the Census Bureau actually caught it, right? So there were questions that were asked in the 2017 or 2018 ABS, the Annual Business Survey.
So they've been asking questions about AI use since before ChatGPT hit the scene. And so that actually lays the foundation for economists to really measure and understand the effects of AI in a much more robust way, having that longitudinal view, sort of, before the release of those technologies and then afterwards.
So I think that in the case of federal statistics, we have sort
of a unique opportunity in the measurement of AI to sort of see how these
things are affecting firms and individuals in real time.
Kevin Frazier:
Wonderful answer. And I will accept the shameless self-promotion. That's always
welcome among academics. Anton, Bharat, anything coming to mind? If not,
that's okay.
Bharat Chandar: Alright,
Anton's going to pass. I do think I have a quick answer here. I do think that we benefit from the fact that the research in this space is often made publicly available. And that's true both at academic institutions but also at the labs.
So there's definitely a lot more data that we could ask the
labs to report, but I do think it's kind of great that a lot of the research that's going into the production of these frontier models is being made public.
A good example of this is GDPval from OpenAI, which is kind of documenting improvement across tasks specific to occupations over time. And, you know, they're kind of tracking that, for the tasks that apply to different occupations, the models are improving pretty quickly in their ability to do those.
And I think that's quite useful. Anthropic is putting out the Anthropic
Economic Index, which gives us a sense about how different occupations are
using AI. And of course there's a lot of research coming out of the academic
community as well, documenting some of these facts.
So I do think it's great that we have some insight into this, because the research is fast-developing, and it's also kind of public, and something that we can look at as economists, or other scientists who want to study the capabilities and how they're changing.
That said, we can definitely ask them for more data in
different domains. I think we all agree that we need to be doing better data
collection, but I do appreciate that this is a space where there's serious
scholarly inquiry and public dissemination.
Kevin Frazier: All
right, Anton, we'll give you the final word.
Anton Korinek:
Alright, I'll add one more quick point which builds on what we've already
discussed before.
So I think economists are kind of inherently uncomfortable making too many predictions, especially when things change super quickly. And that's why I want to reinforce this need for scenario-planning.
And I think it's so important, even though we are uncomfortable with it, even though we have to make lots of assumptions and, of course, most of our predictions are going to turn out to be false. I think it is crucial to engage in scenario-planning to head into this rapidly changing future.
Kevin Frazier: Well,
I had a scenario plan in my head of when three economists walk into a podcast,
what the heck happens?
And I have to admit, in most of those futures, I thought, ah, this may be a little boring. This was awesome. You all rock.
I will look forward to the day of having you all back on to
share your next research, but thank you all again for taking the time.
Anton Korinek: Thank
you so much.
Bharat Chandar: Thank
you.
Nathan Goldschlag:
Great to be with you.
Kevin Frazier: Scaling
Laws is a joint production of Lawfare and the University of Texas
School of Law. You can get an ad-free version of this and other Lawfare
podcasts by becoming a material subscriber at our website,
lawfaremedia.org/support. You'll also get access to special events and other
content available only to our supporters.
Please rate and review us wherever you get your podcasts. Check
out our written work at lawfaremedia.org. You can also follow us on X and Bluesky.
This podcast was edited by Noam Osband of Goat Rodeo. Our music
is from ALIBI. As always, thanks for listening.
