Lawfare Daily: A Right to Warn: Protecting AI Whistleblowers with Charlie Bullock

Published by The Lawfare Institute in Cooperation With Brookings
In the wake of controversy over OpenAI’s restrictive nondisclosure agreements, a bipartisan group of senators has introduced the AI Whistleblower Protection Act. In this episode, Lawfare Research Director Alan Rozenshtein spoke with Charlie Bullock, Senior Research Fellow at the Institute for Law & AI and co-author of a new Lawfare article on the bill, about its key provisions. They discuss why this bill is an important, light-touch proposal that offers a way to increase government access to information about AI risks.
They cover two of the bill's most important features: how it fills a significant gap in existing law by protecting disclosures about “substantial and specific dangers” to public safety, even if no specific laws have been broken, and how the bill prevents companies from using contracts and NDAs to waive the whistleblower rights it creates.
To accompany the episode, be sure to read the new piece by Bullock and Mackenzie Arnold, “Protecting AI Whistleblowers.”
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Please note that the transcript below was auto-generated and may contain errors.
Transcript
[Intro]
Charlie Bullock: Our
legal system hasn't yet caught up to the rapid progress of this technology, and
we haven't had a chance to decide whether that should be a violation of law or
not. So we would still ideally like people to be able to report that kind of
danger, substantial and specific danger, as the language the statute uses, to
public safety. That's the big thing that the statute does, is it, it fills that
gap.
Alan Rozenshtein:
It's the Lawfare Podcast, I'm Alan Rozenshtein, associate professor at the University of Minnesota Law School and Research Director at Lawfare, with Charlie Bullock, senior research fellow at the Institute for Law and AI.
Charlie Bullock: I
can't see downsides of this in terms of like being some huge giveaway to
industry or something, right? It doesn't really benefit them in any concrete
way. At most, it, it doesn't go far enough.
Alan Rozenshtein:
Today we're talking about the new bipartisan AI Whistleblower Protection Act, a
Senate proposal that's designed to fill the gaps in existing law by protecting
employees at artificial intelligence companies who blow the whistle on the
dangers of advanced AI.
[Main Podcast]
Before we get into the details of the bill, I wanna start with
some table setting. So first basic question, how do you define whistleblowing?
Not necessarily as a legal matter which we'll get into later, but just as the
phenomenon that you think there needs to be some protections for.
Charlie Bullock:
Yeah. Essentially I view whistleblowing as the act of a, an employee, or as the case may be a former employee, or as the case may be an independent contractor at a company, reporting corporate wrongdoing, typically to the proper authorities. That's, that's what whistleblowing is.
So the core example would be you're an employee at a company.
You notice that your company is violating the law, and you report that to law
enforcement. The idea behind whistleblower protections is that your employer
shouldn't be able to punish you for that, that action.
Alan Rozenshtein: In
the development of AI in its modern incarnation, let's say the last three
or four years. What role has whistleblowing played in getting information
public? Like how, how important has this actually been, which is obviously a
separate question to how important one might think it will be in the future?
Charlie Bullock:
Yeah, the big example is the situation that occurred at OpenAI in 2024. In, in
that situation, employees at OpenAI, as they were departing from the company,
were pressured, threatened with withdrawal of their vested equity, which was
worth tons of money, you know, to sign these non-disparagement agreements.
These non-disparagement agreements were super broad, so
essentially it would've said: if you sign this, you can never say anything bad
about the company again, even if it's public knowledge, indefinitely into the
future.
Alan Rozenshtein:
And, and even if it's true?
Charlie Bullock: Even
if it's true. Yeah. Especially if it's true, right? No, but okay, so this,
obviously employees were not happy with this, especially since some of these employees were, you know, working on OpenAI’s policy teams or whatever. They
were concerned about AI safety.
And so they threw a big fit. They published a call, an open
letter calling for a, a right to warn about frontier AI that got a lot of
traction. A lot of important figures signed onto this and, and pretty much
immediately that sparked political interest in it. Chuck Grassley, who's the
senator who's behind this recent bill that's been introduced at the time, said,
okay, we're looking into this. We're gonna see if employees at AI companies
need, need whistleblower protections. So that was the, the big example.
Alan Rozenshtein: So
I, I think that's a good example of, of how this issue has, sort of, has come
up. I, I guess I'm curious about whether there's been an example of actual
whistleblowing. Now maybe the answer is no, because these models aren't
powerful enough to yet be dangerous, so there's nothing, there's nothing to
blow the whistle about yet, even if there might be, you know, imminently.
But I, I'm just curious if in the past there has been someone
who has not just wanted to blow the whistle but felt pressured that they
couldn't because of some contractual agreement, but actually did so about
something. I mean, I, I seem to recall there was a Google employee who thought
that–
Charlie Bullock: Yeah.
Alan Rozenshtein: Gemini was sentient. I, I don't know
if that, I mean, is that, is that an example of what you're talking about? It's
an odd example. In a certain, in a certain sense.
Charlie Bullock: Yeah,
so that would not be protected by the current law. Right, because he was not
referring to a, a law violation or a substantial and specific danger, right.
Which is kind of, I'm sure we'll get into this later, but that
substantial specific danger language is designed to filter out exactly this
sort of thing, right? Like sort of unfounded or vague worries about what future
AI systems might do or something. Or in this case, yeah, the claim that Gemini was
like, this was like early Gemini. I think this was like two years ago or
something. Right? I wanna point out–
Alan Rozenshtein: This
might have been Bard or something even before.
Charlie Bullock: It
was not even a good model. Yeah, I think it was pre-Gemini, so not even, not
even a good LLM by today's standards or indeed by 2021 standards or whatever it
was. But yeah, he was claiming that it was so good that it was sentient and
worthy of moral consideration, so forth. Which I think in retrospect was a
ridiculous claim. But yeah, that was an, that was an attempt at whistleblowing
at least.
Alan Rozenshtein: So,
so that's what whistleblowing is. So let's talk about why we might just, as a
general matter, need whistleblower protections. You know, just naively, one
might think that there's no problem here.
Because generally speaking, I as a private citizen get to say
whatever I want. This is not like classified information. And so what, what is
the problem that whistleblowers sort of as a general matter get into that has
created this array of whistleblower protections that, that we have, or that we
might want to buttress in certain circumstances?
Charlie Bullock:
Right? So generally whistleblowers are reporting corporate wrongdoing of some sort, right? They can also report dangers, but typically, historically it's been, it's been law violations, right? The government's interested to know if companies are breaking the law. Obviously companies don't like being reported for violations, because they've done something wrong if there's a valid whistleblower complaint about them.
And so in that situation, they might take retaliatory action
against the employee, right? Fire them is usually what happens, right? You
know, you, you reported us to the SEC for all the various securities law violations we did and the frauds we committed. So we are gonna
fire you. And that obviously is a big disincentive to blow the whistle if
you're gonna get fired.
You know, maybe society doesn't care too, too much about the
individual employee's, like, salary and whether they get a slightly worse job or
better one, but of course the employee cares. So that's if, if they're affected
by it, it's a powerful disincentive to blow the whistle, which is why it makes
sense to protect them from that.
I mean, obviously freedom of contract is a really important
concept in American law as you know, and so generally most employees in this
country are at will, so you can fire them whenever you want for whatever reason,
except right, except if there's statutory protection or something for it.
Alan Rozenshtein: So,
so it strikes me that there are actually maybe three distinct kinds of
repercussions that a whistleblower could get. And I'm just curious how
whistleblower law generally treats these, right.
So one possibility, the one you just talked about, is just being fired. Okay. Another is that some benefit is contingent on you not
whistleblowing. So the example you gave earlier about OpenAI employees needing
to sign a non-disparagement clause in order to get their equity vested. And
obviously if they whistleblow, that's disparaging and then they don't get
their equity, okay. That is not giving them a benefit, which is different than
a punishment, but obviously it has similar incentive effects.
And then there's the third, which is going after them for, for example, violating trade secret law, right? If the whistleblowing involves
that, and I'm curious about the third, whether the law already recognizes an
exception for whistleblowing. So by analogy, for example you cannot defame
someone if you, if you say true things about them, right? By definition, truth
is a defense to defamation. So is, for example, whistleblowing a defense to
otherwise applicable trade secret law.
Charlie Bullock: This
is great. We've gotten right directly within the first, like three minutes of
this podcast into the absolute deepest part of the weeds with this.
Alan Rozenshtein: This is Lawfare baby.
Charlie Bullock: Lawfare baby. Let's go.
Alan Rozenshtein: This is Lawfare.
Charlie Bullock: This is the, this is the sort of very
fine legal question that I spend tons of time thinking about and try to bring up because it's, it's a very technical deal.
But, okay so Chuck Grassley who is behind this bill, was also
around when the Defend Trade Secrets Act passed in, I believe, 2016. And the
Defend Trade Secrets Act is the current, like federal trade secret law, right?
It provides protections for trade secrets. It has a carve out for
whistleblowers specifically.
Now that carve out is limited to reporting about violations of
the law, right, because in, in a lot of, not all, but in a lot of
whistleblower legislation, they only protect people who report violations of
the law. In this bill that's been proposed and in some other bills that exist,
right for example the Federal Whistleblower Protection Act, which applies to
federal employees, you are also covered if you disclose information about a
substantial and specific danger to public safety or public health or national
security, et cetera, et cetera, statutory language.
But one of the interesting things about this bill and one of
the few criticisms I have of the bill that Grassley introduced is that it
specifies that you cannot be punished for any lawful act that results in
disclosure of information about et cetera, et cetera. And so there's a real
question about whether reporting information that involves trade secrets to the
government about substantial and specific dangers to public safety is a lawful
act or not because it is not covered by the language of the Defend Trade Secrets
Act exception.
Now there's, there's reason to think that a judge might
essentially say, okay, well this is a lawful act anyways for reasons of public
policy. And the reason that we think that is because prior to the passage of the Defend Trade Secrets Act, it was already a crime or, you know, unlawful in various ways in, in many jurisdictions to misappropriate trade secrets.
But there is some case law from before the DTSA saying essentially, sure, they misappropriated trade secrets technically, maybe, but we're not gonna call that, we the court are not gonna call that an unlawful act, because they did it to report a law violation, and policy dictates that we should not punish, punish employees for that.
Alan Rozenshtein: Okay,
so this is actually a good segue into the kind of other background legal regime
that exists, which is whistleblower protection laws. Right? So you mentioned,
right, that there's this whistleblower carve out in this Trade Secrets law. You
mentioned that there's a federal whistleblower statute.
Obviously, California has its own whistleblower law that I think is worth getting into in, in some depth because of course all of
the main research labs are in California and therefore that's gonna be the
primary jurisdiction here. So just, just talk me through what is the current
legal regime before we get to this proposed bill from Senator Grassley that
would apply to AI whistleblowing.
Charlie Bullock:
Sure. So you've got a patchwork of overlapping federal laws, state laws, common
law protections and whether an individual whistleblower's actions are protected
is, can be kind of complicated, right? Federally, there is no overall
background federal whistleblower protection law, right? It mostly is left up to
the states.
Alan Rozenshtein: And
do you know if that is, is that, do you know if that's a policy decision or,
you know, might that even be a sort of a constitutional limitation? Like, I, I
do wonder honestly, if, if Congress could even enact a super general
whistleblower law under the Commerce Clause.
Charlie Bullock:
That's you would know more about that than me. I, I have always assumed in the
course of researching this over the last, however long I've been working on
this, that it would be possible to pass a generally applicable federal
whistleblower law. I've seen people propose it, but I can imagine
constitutional arguments against it, for sure.
Anyways, yeah, there's, there's no background federal statute
saying you can't punish whistleblowers and state laws vary. Some states have
essentially no statutory whistleblower protection. New York, for example, is
pretty light in terms of whistleblower protections. A lot of Republican states
have, have very little, because, you know, they, they value freedom of contract
more strongly. They think, hey, if you sign this thing, saying they can fire
you for whatever, they can fire you for whatever right.
Now other states, including notably California, have much
stronger whistleblower protections. California's is not the most robust, but
it's up there, it's pretty, pretty strong as whistleblower protection laws go.
It's Section 1102.5 of the California Labor Code.
Alan Rozenshtein: For,
for, for those of our listeners who are following along, you open up your
California statutory book.
Charlie Bullock: Yeah,
yeah, of course. I, I assume, I assume that all of Lawfare–
Alan Rozenshtein: This is the Lawfare audience
after all.
Charlie Bullock: The greatest audience of the world.
So what that says is, you can’t discriminate or punish. It, it’s
funny, all these statutes, you know, you asked earlier about denying someone a
reward versus firing them versus other kinds of punishment. They, they try to
cover everything. And the way that statutes do this is with incredibly
repetitive language.
So the California statute is like, you can't make, adopt or
enforce any rule, regulation, or policy preventing an employee from et cetera, et
cetera. And then it covers all the things that you can’t do to them. What it
does essentially is say that for reporting to a government agency or law
enforcement agency, information about any violation of any federal, state, or
local law, rule, or regulation–right, so very comprehensive–but essentially
reporting to the government a violation of some legal rule, you cannot punish or
fire.
That's what California has. And that's highly relevant obviously.
It's the primary current law that protects AI employees, right? If they report
a violation of the law, they, they can't be retaliated against for that, in
theory, under California law.
Alan Rozenshtein: So
is California law kinda the only state law that really matters here? And the
reason I ask is because, you know, it is true obviously that most of these
companies, maybe I think all of these companies are headquartered in
California, but they also, some have operations in different places.
Obviously they have different campuses in different states. They
might have some remote employees. I mean, are there any other states where we
should care what the laws are? I mean, I know like Google has a campus in New York and a big tech center in Texas. Obviously Washington has Microsoft. I’m
just curious if any other states are worth thinking about here.
Charlie Bullock:
Yeah. I think California is the vast majority of what we're, of, what we're
concerned about. But I mean, you can imagine situations in which, yeah, an
employee in New York discovers something important, right? A remote worker
right discovers something important. And then whether the, the other states’
laws matter is a, is a question that depends on the individual state law.
Like I, I've looked at, you know, I live in Illinois, I've
looked at Illinois’s whistleblower law pretty closely. And it's broadly phrased.
On its face, it says like, it defines like employer, which is the, the, the
regulated category, as anyone who employs any employee in Illinois. Right? So
on its face, the Illinois statute looks like if you have any employees in
Illinois, which I imagine most of these big companies do, I assume there's one
Google employee in Illinois, then you are subject to this law.
Now, of course there are questions of personal jurisdiction and
stuff like that. I haven't done the, going through like how that all would
shake out. But there's, it’s at least plausible that like other state laws
would come up in the future. Also, you know, we don't know where all future AI
companies will be located, so it's possible that some of them will be in states
other than California.
Alan Rozenshtein: We
have this California law and we wouldn't be talking about a federal law if we
thought the California law covered all the issues. And so when you describe the
California law, you identify two important scoping provisions there. One is
that it only applies to whistleblowing about violations of law, and it also
only applies to whistleblowing that is communication to the government. So are those the two issues that, in your view and perhaps in the view of policymakers in Washington, are deficient?
Charlie Bullock:
Yeah, that covers most of it. I, I don't think the government thing is
deficient in this context. I'm in favor of limiting it to reporting to the
government in, in the AI context at least. Because while there is value to
making these things public, you can see why companies wouldn't want their
valuable trade secrets and so forth disclosed just on the front page of the
newspaper, right, where their competitors could see it and so forth. So I think
it, I am in favor of, of maintaining that limitation, which the, the proposed
bill does.
Another thing to mention is that I may have slightly
mischaracterized the California bill earlier. It also covers internal reporting
within the company. So either, either to your superior or somebody in the
company who has authority over you or to the government. Those are the two
things.
But you know, essentially the main, the main thing we're
concerned about is, is reporting to the government. There are other things that
matter as well, other than those two, sort of, scoping concerns. For example,
the California bill doesn't do a perfect job of handling things like the OpenAI
non-competes, right? Whereas the federal bill that's been proposed is much more
clear on that question of whether that kind of thing is, is enforceable or not.
Alan Rozenshtein: But
just going back to the California law then, is, is the main reason that this is
in the public conversation because it is just too narrowly about violations of
law and there's just a lot of stuff that we might want to empower
whistleblowers to communicate that is just not a violation of the law.
Charlie Bullock:
Yeah, that's correct. Essentially, that is, that is the main focus of the new
law. And the main reason that people are concerned about this is because AI is
an emerging technology. It’s very lightly regulated. And so, you know, it's
very plausible that dangers could arise from some sort of truly transformative
dual use technology that has all sorts of national security applications and
stuff like that. That a danger could arise that wouldn't be a violation of the
law, right?
For example, we can imagine something like a more reasonable version of the, the Google report that we discussed earlier. Somebody who's involved in safety testing at OpenAI or Google or Anthropic observes, okay, during safety testing, ah, this new frontier model we have, that's extremely capable, more than any we've seen in the past, is extremely good at helping people design whatever chemical weapons or something like that, right? In fact it's really easy to jailbreak it and, and get it to, you know, allow bad actors to take these very dangerous steps.
It's not necessarily clear that the company allowing it to do
that and then publishing the model would be a violation of law. You can make
the argument that it is, but it's, it's not clear that is because our legal
system hasn't yet caught up to the rapid progress of this technology, and we
haven't had a chance to decide whether that should be a violation of law or
not.
So we would still ideally like people to be able to report that
kind of danger, substantial and specific danger is the language the statute
uses, to public safety. That's the big thing that the statute does, is it fills
that gap.
Alan Rozenshtein:
Okay. So let's now finally get into the law itself. So walk me through it. What
are the main provisions of it? What does it purport to do? And just sort of
give us, give me your evaluation of whether or not it, it, it is aimed at the
right thing and whether or not it accomplishes what it is aimed at.
Charlie Bullock: So
the law is the AI Whistleblower Protection Act. It is sponsored by three Republican and three Democratic senators, a bipartisan group. There's a companion bill in the House that's sponsored by a Republican and a Democrat. The chief sort of force behind it is, is Chuck Grassley, who is the main sponsor of it. And he's been working on whistleblower stuff for a very long time, so this is right up his
alley.
The law essentially covers three different kinds of reporting
and protects them. That's the main thing it does. The three different kinds of
reporting are violations of any federal law. That's good. It's somewhat redundant, maybe, of the California statute that already exists, but it's good
to have that at the federal level because there's different remedies you can
seek in federal court versus state court and so forth. And also it's, you know,
nationwide as opposed to just specific to California.
The second thing it does is it protects reporting about
substantial and specific dangers to public health, public safety, or national
security. So that's a, a fairly demanding standard. It has to be substantial
and specific. It can't just be some vague future worry that you have. But if
you do identify a substantial specific danger arising from something to do with
the development or deployment or whatever of one of these models, then that is
protected if you bring that up and report that to the government.
The third thing it protects is reporting about AI security
violations. This has been a, a pretty hot topic recently as well. The idea of
lab security, the idea that we need to protect AI labs from having their model
weights, their algorithmic secrets stolen by, for example, China or by bad
actors here domestically. And so if you notice some big flaw in lab security that's gonna allow China to steal the model weights of GPT-5 or whatever, and this would be very dangerous and, and, and bad, then you can also report that to the government, and that's covered as well. So that, that's, that's the main
thrust of what the law does.
In addition to that, it, it has a bit about the non-enforceability of, of waivers of any rights in the law. So essentially what
that means is stuff like the OpenAI NDA that we discussed earlier is
unenforceable under this. Any sort of like workplace policy preventing you from
talking about this stuff to the government is unenforceable with respect to
these, you know, legitimate whistleblower complaints.
And also anything like an arbitration agreement is unenforceable with respect to these; you, you have to be able to go to the Department of Labor first, and then the courts, which is the, the sort of system of remedies that is set up. Like if you get retaliated against for doing this, first you can make a complaint at the Department of Labor, and then if, you know, that doesn’t get responded to in a certain amount of time, you can sue in district court.
That’s sort of the outline of what the bill does basically.
Alan Rozenshtein: Who
do you have to disclose to within the government? I mean, is it literally
anyone in the government or is there like a specific point of contact? How is
the reporting supposed to work for you to be able to get the benefits of this
whistleblower protection?
Charlie Bullock: They
elected not to name a specific point of contact. I think a lot of decisions
that were made about this bill were made in an effort to make it sort of as low
friction as possible. Like it, it doesn't wanna make big decisions about like,
which part of government is gonna regulate AI, which has been a big question in
the past for people who, who work on AI policy.
It doesn't wanna, like, you know, if you designate a hotline, if you designate a government office that's gonna receive these complaints or something like that, that’s sort of committal and it requires the government to do stuff. The idea behind this bill in large part is like, it doesn't impose any obligations on anyone, except the negative obligation of don't retaliate against employees for making these covered reports.
So the people you can report to are, it says the appropriate
regulatory official or the attorney general, a regulatory or law enforcement
agency, any member of Congress or any Committee of Congress. Or then as part of
testimony or assisting an investigation and so forth for the Department of
Justice and, and so on and so forth. It also covers, much like California law, reporting
to people within your company who have supervisory authority over you and so
forth.
Alan Rozenshtein:
Okay. So let, let's talk now about the, the scope of what it covers and the
sort of substantial harm to public health and, and that sort of stuff. So
that's obviously much broader than just violations of law, that's a good thing.
But it could be broader. And I know one might worry that it's not broad enough.
So, you know, again, going back to the, the example we talked
about earlier about, you know, what if you think that a model has become
sentient and that it is suffering horribly, right, which you might really worry
about, or even maybe let's take a less high stakes example. You know, imagine
you're really worried that children are becoming addicted to these, to these
models, right? I mean, they're not, not in the sense of, and then they're
learning how to make biological weapons. But in the sense that, you know, we've
had lots of debates over the last 10 or 15 years, and we've even had
whistleblowers, right?
Maybe most notably Frances Haugen from, from out of Meta about the effects of, of social media on children and what the companies knew about that. You can imagine very similar situations happening regarding AI. Would those
things be part of, of the whistleblower protection? I guess the big, the, the
bigger question I'm asking is, you know, what are the trade-offs with how
broadly you scope whistleblower protections?
And I guess in your opinion, does this bill get the trade-off right? Or does it err too much on the side of protecting trade secrets or, on the other side, you know, err too much on, you know, letting any disgruntled
employee say anything even remotely nasty about a company because they've
convinced themselves that it's the worst thing in the world?
Charlie Bullock: In
my view, I think it gets it right. My colleague, Mackenzie Arnold and I wrote a
blog post, I guess, a month or so back now about how, how to design AI
whistleblower legislation, and we suggested this substantial and specific
language. And the reason we said that was because I think a big part of the
appeal of whistleblower legislation is that it's truly bipartisan in this
context, right, not historically. Historically, it's been more of a, a Democratic priority probably. But in this context, it has a lot of support from industry friendly
folks, from libertarian minded people, from Republicans. And so I think that in
order to maintain that situation where a lot of people are in favor of it, some
people maybe don't have strong thoughts about it, and then pretty much no one
is against it hopefully, or at least no one has expressed anything against it
so far that I've seen, you don't wanna make it as broad as it possibly could
be.
I, I agree that it's, you can imagine a broader law and like
for very safety minded people that might be better, right? For example, there's
some concern that you might have very substantial worries about serious risks
and good evidence for them, but just because of the nature of technology, you
can't be too specific about what it's gonna do. You just know that it's bad, but you don't know exactly how or what forms the harms could take. There's some question whether that kind of reporting would be covered by substantial and specific.
Alan Rozenshtein: So
would an example of that be like GPT-7 can do automatic code improvement so
much that you think that like artificial super intelligence is nigh and like
that could involve us all being turned into paperclips, maybe, right. Depending on what AI doomer tract you read last time and like that's the sort of thing
that you'd want the government to know about. But you, you're not, it's, it's a
little too speculative to fall under this law. Is is, is that the kind of
concern you're, you're talking about here?
Charlie Bullock: Yeah,
it's at least debatable, right, whether that would be protected by this law or
not. And I will say that I think for concerns like that, like the fact that it
might or might not be covered by this law doesn't mean that it can't be
reported, right? You can still anonymously whistleblow to some of the many
watchdog organizations that are getting set up to handle whistleblower
complaints from people at frontier AI companies, right?
It's still possible to blow the whistle. You just might get
fired for it if, you know, if your company thinks that you did the wrong thing.
Alan Rozenshtein: And
it's interesting and I, I, I guess just to not get sort of too game theoretic
about this, but, but why not? I guess that is a relevant margin to think about
how much protection you want. Right? Because if you're thinking about it like
the, the optimal level of whistleblowing is, is not necessarily to have no one
get fired, it's to not have people who would otherwise whistleblow fail to whistleblow.
And one might think that actually, you know, if you're worried
about this sort of speculative AI risk, that's exactly the sort of person who's
just gonna whistleblow anyway 'cause they're so freaked out. So you actually
don't need whistleblower protections. I, I, I guess, I mean, I, I dunno, maybe,
maybe, maybe I'm crediting Congress a little bit too much with this strategery,
but, but you know, if I, if I was, if I was, if I was thinking about sort of
how to optimally design a whistleblower regime, I guess that is one dimension
that I, I think about.
Charlie Bullock:
Yeah, I mean, in theory, if you think that you're gonna be a paperclip in six months, the idea of losing your very cushy job and having to go to a slightly less cushy one, you know, oh, I'm just a machine learning engineer, what am I gonna do for work? I mean, it's, it's, in theory that wouldn't be that big of a
disincentive.
But I mean, I, I think in practice sometimes it's surprising
how like immediate sort of concerns about comfort or whatever influence
people's decisions. But yeah, no, I agree.
Alan Rozenshtein: I
wanna go back to this reporting question because, you know, I kind of mentioned
a little bit this question of, you know, is it enough to be able to report to
the government? And I’ll be honest, personally, that's sort of my big concern
with, to the extent that I wonder if this law is insufficient, that's
obviously not a reason like to vote against it. This is obviously much better
than the status quo.
To me, it's actually not so much the scope of the law, but the
reporting. The reason I say that is because, you know, especially with the lack
of a lot of substantive regulation of AI, it's actually not obvious to me how
useful it is simply to be able to report to the government that such and such
model carries with it such and such increased bio, bio risk.
Now obviously, it, it's better for the government to know this
than not to know this and, and maybe I'm focusing too much on the current
administration. And my, I'll admit not terribly high opinion of it, though I do
think that's actually relevant because you know, if you take some of the
forecast seriously, a lot of what's about to happen world historically, is
gonna happen in the next three and a half years, which for better or for worse
is coterminous with the Trump administration.
But I guess my, my point is, I guess I would feel better, and I
would think that this law had a lot more bite in terms of actual AI safety, not
just telling the government things that it may or may not act on, if it also
applied not just to the government, but to the New York Times. Now, again,
maybe, maybe the trade secrecy concern is just too much in that regard, but
tell me why I should be less grumpy that this does not also apply to public
disclosure.
Charlie Bullock:
Yeah. So I think that's a perfectly valid argument and I think probably a lot
of people agree with you that a better law would be one that said there's a
right to warn generally. I think that anonymous reporting to the New York Times
remains an option, so that sort of mitigates the harm you're talking about. The
New York Times is good at protecting its sources and they take this kind of
report and they're, they have experience, you know, not, not ratting you out
and so forth.
I think that one reason you should not be too grumpy about it
is that I view this as a building block, right? I think that this is the first
substantial piece of AI legislation that's gonna pass at the federal level,
hopefully, I mean, if it does pass, which it may or may not.
If it was, right, the first really substantial piece of AI
safety legislation, then it, it should be a foundation for things to come.
Right? And I think information gathering has to be the first step in any sort
of comprehensive AI governance regime, right? You have to, you have to know
something about what you're regulating before you regulate it, right? If you
take big substantive steps early, you're gonna get 'em wrong because, you know,
science and technology scholarship, people talk about the pacing problem,
right?
Technology always improves faster than laws can improve or
whatever. So it's important to improve government capacity to the extent that
you can. And the best way to do that right now is by improving the government's
ability to gather information. Right? So things like the Biden administration's
reporting requirements also go into this, but I think whistleblower protection
is a very important part of that.
So I, I agree with you that like, okay, you report to the
government and then what? Right. Currently the government has no ability to do
anything about anything. And, and the timelines are short enough that maybe we shouldn't be optimistic that the government will be able to do anything about anything. In my opinion, there are some worlds in which the government's
capacity to regulate AI matters into the future, right. Including, you know,
good worlds where like, you know, we're not all paperclips or whatever. So
you're, you're building a foundation for future governance efforts.
Alan Rozenshtein:
Okay, so, so we've talked about, you know, ways in which maybe the bill could
have gone farther, but also maybe now let's talk about maybe some of the
downsides of just the bill going as far as it does.
Because look, right, as the economists like to tell us, there are no free lunches; there are all these interesting second order unintended consequences and, and you know, we should think about that for, for this kind of law and, and I guess for whistleblower protection law generally. Because
just like off the top of my head, I could imagine some unintended consequences
of a law like this, or maybe not unintended, but certainly negative
consequences.
So you could imagine if you have a very strong whistleblower
protection regime, then perhaps companies will hire fewer people because
they're worried about more whistleblowers. Or when they hire people, they'll
spend a lot more time screening for loyalty, however you screen that. Or maybe
once they've hired people, they will compartmentalize a lot more information
and silo that information. Because again, to limit the, the, the, the number of
people that will have that information, who might then whistleblow.
And, and that might actually lead to less disclosure overall
because again, in the absence of whistleblower legislation, you don't not have
whistleblowing, you still have whistleblowing on the margin.
Charlie Bullock: Right.
Alan Rozenshtein: So, you know, I'm, I'm curious what
you think about those arguments and whether any of them give you pause.
Charlie Bullock: I
think those are valid concerns in theory. I think that for the ones you brought
up to be relevant, companies would have to actually like be very concerned
about whistleblowing. I think at the moment it doesn't appear that they are,
right.
Alan Rozenshtein: I
mean, OpenAI was, OpenAI had that non-disparagement clause, so they clearly,
someone in the general counsel's office thought about it, even if they
backpedaled embarrassedly.
Charlie Bullock:
Sure. But my model of how that worked out is just that tech companies love to
be as broad as possible with their non-disparagement and non-disclosure
agreements just to cover their bases basically. It's like, well, why not be as
broad as possible if you can. And OpenAI almost immediately, as soon as it
became public, rescinded those clauses and said, okay, we're not gonna enforce
them, which I don't imagine they would've done if it was a sinister plot.
I, I mean, maybe they just thought it would never come out. But
I think, yeah, I, that's my model of that, is that maybe companies like to just
prevent you from saying anything 'cause why not? There's no cost to them for that,
but they're not, at the moment, it doesn't seem like AI companies are super concerned
about whistleblowing because if they were, they would have expressed some kind
of opposition to this bill. They really haven’t.
Alan Rozenshtein:
Also, am I right that just also the, the kind of culture and, and social anthropology, for lack of a better term, of the Bay Area, makes this kind of
overcorrection unlikely. Like my, my sense is that, you know, between the fact
that California also has very strong restrictions on, on like non-compete
agreements, which is why everyone circulates around the labs.
And as far as I can tell, there are like a thousand key machine
learning engineers who all go to the, you know, the same parties and I don’t
know are a part of each other's polycules. I, I don't know. I'm, I'm a, I'm just, I'm an innocent Midwesterner. I don't understand this, this, this West Coast
world, that like already the information environment is like pretty porous.
Is that, is that like a, a, a fair statement about how the
current system works?
Charlie Bullock: I am
also an innocent Midwesterner. I lived in the Bay Area for, you know, a little bit more than a year, and I, that's about as long as I could make it before I headed back to Chicago. So I, I, I, I would, I hesitate to offer like, authoritative opinions on the, the information porousness and so forth.
But I think there's something true about the fact that like,
there's a libertarian ethos there, right? There's an idea of like, I mean,
obviously business owners wanna protect their business interests. People take
trade secrets seriously. Stuff like that. People don't screw around with
patents. But I mean, having been a patent lawyer in the past, there's often a
dynamic of, like the tech people mostly care about the tech and then the
lawyers swoop in to handle the suing people about intellectual property and
stuff like that, right. And the actual engineers involved are, are maybe less concerned about that aspect of things typically.
Alan Rozenshtein:
Let, let's, let's turn to the politics of this bill, which are perhaps more
straightforward than one would otherwise expect. You know, as you pointed out,
this is a bipartisan bill and, and I guess let me start by asking why you think
that's the case, right?
I mean, you, you, you mentioned earlier that whistleblower law,
because it's kind of labor protective law, is often left-coded, and yet this
seems to have bipartisan support. Its main sponsor is Chuck Grassley, who's a Republican
from another excellent Midwestern state, Iowa. Why do you think that this of
all issues seems to have reasonably smooth sailing through, through Congress?
If, if in fact it, it does. I mean, you should also comment on
whether you think that, that Congress will in fact enact this, this session or
next session, or some time.
Charlie Bullock: So
the first thing I'll say is that I, I do think it's true that it has bipartisan
support. Almost no one that I've seen dislikes the bill or like, is actively
against it.
It may not pass this time around. If it doesn't, it'll probably
be, not because it has strong opposition, but because there's insufficient, like it's an insufficient priority, right? Like it's, it's going to the, the HELP Committee, which is like labor, right? And so you have to get
it into markup in that committee, right?
And so time will tell if Chuck Grassley has enough pull with
the senator who's the head of that committee to get him to put it in for markup
or not. Right? And if he doesn't, then it's not because that senator hates this
bill, it's just because he has other bills that he wants to bring up for markup
and there's limited spots for this sort of thing and so on and so forth. So
that’s my sort of perception of the politics of this.
In terms of why the politics are that way, I think it’s a
combination of factors. I think for one thing, the great man theory of history
scores another point here. It's just the idiosyncratic policy preferences of
Chuck Grassley personally. He's a very senior and important senator. I, you
know, you could try to explain those in terms of like sweeping historical
trends and stuff, but I, I think it's just, you know, I, if you've seen how the
guy tweets, he's his own person.
Alan Rozenshtein: Chuck likes whistleblowers.
Charlie Bullock: He likes whistleblowers. There's a,
there's a great, great tweet by him about this that, that you should all look
up. It's in his characteristic style.
But okay, another factor that explains how, why other
Republicans are maybe more behind this than in the past is that in recent years
there's been sort of increasing conservative skepticism of big tech because of
perceived censorship of conservative opinions, right.
I'm not the expert on, on this topic, but people felt very
strongly, and I, I think there was some basis for it that prior to Elon Musk's
acquisition of Twitter, there was a perception on the right that conservative
voices were being censored on Twitter, which is part of the reason that he acquired that platform, to promote free speech and so forth.
There's also a similar perception; I remember the James Damore memo at Google was a big concern. People on the right thought that he was saying sort of straightforward, correct things and he was canceled for this and et cetera. So that current is also behind the willingness of
conservatives to go after big tech for stuff, right? Josh Hawley, for example,
is one of the supporters of this bill in the Senate, and he has been vocal
about that sort of thing. So I think that plays into it as well.
And the final aspect of it, I think is that if you wanna do
preemption, which a lot of Republicans do, because they believe that a
patchwork of state regulations is the wrong way to handle AI regulation, right.
They think that it should be a single federal standard, and then states should
get outta the way.
Alan Rozenshtein:
And, and just to clarify, we're talking about preemption of AI regulation
generally.
Charlie Bullock: Yes.
Alan Rozenshtein: At
the state level.
Charlie Bullock:
That's right. Yes. You know, I mean, future preemption bills could take various
forms. They might be more narrowly tailored than that. They might or might not
be general, right. But if you are a proponent of broad preemption of state AI legislation, it helps politically to be able to say, okay, well, there's
already federal legislation, at least some federal legislation, some federal
regulation that addresses this. It's not a complete regulatory void.
So Jay Obernolte, who's sort of the House mastermind behind the moratorium, the preemption measure that's in the, the recently passed House reconciliation bill, is also, he's the House sponsor of this whistleblower
legislation. So I think that points to the idea that it's also appealing for,
for those sort of secondary political reasons.
Alan Rozenshtein:
Okay. I wanna finish off by talking about industry and its reaction to this. I
think you mentioned that industry also seems reasonably supportive of this, or
at least not gung-ho opposed to it. And so I’d love for you to talk about why
you think that is. And also if industry supports this, should that make us
worried that it's a bad bill? In other words, if industry supports this, should
we worry that this bill is actually too weak?
Charlie Bullock:
Yeah, my perhaps naive opinion is that there are win-win situations sometimes
in life, right? There are dead weight losses and there is stuff that's good
for everyone, right?
I don't know that this is necessarily like a huge win for
industry having this, but it's, it's in my mind, such an obvious common sense
thing to do, right? Like, if you think about what we're actually saying, we're
saying if there is a legitimate danger to public safety, people should be able
to tell law enforcement about it. Like that's, if you oppose that, you're gonna
look ridiculous.
Alan Rozenshtein:
Yeah. But, but it, it, industry frequently opposes things that are good for the
public and they're happy to look ridiculous, you know, if they think that it's
in their interest.
Charlie Bullock:
You're right, and, and so maybe that's a sign that this bill is to some extent,
toothless, right? Because it doesn't actually impose any costs on them and
they're not worried about it. Now, that could be a good sign. Maybe the fact
that they're not worried about it means they're thinking, we're not gonna
violate any laws, we're not gonna impose any substantial dangers on the public.
So what's there to worry about?
Right. If so, good. Right? Then the worst thing that's happened
is that we passed a bill that doesn't do much. But yeah, I, I can't see
downsides of this in terms of like being some huge giveaway to industry or
something. Right. It doesn't really benefit them in any concrete way. At most
it, it doesn't go far enough, but, you know, show me the piece of AI
legislation that we have right now that has a good chance of passing that does
go far enough and then I'll happily support that instead.
Alan Rozenshtein:
Well, I think that's actually a good place to, to end it Charlie, thanks for
coming on and, and we'll be sure to have you on, on again if this thing ends up
getting through Congress.
Charlie Bullock: Thanks
so much for having me, Alan. Really good talking to you.
Alan Rozenshtein: The
Lawfare Podcast is produced in cooperation with the Brookings
Institution. You can get ad free versions of this and other Lawfare podcasts
by becoming a Lawfare material supporter through our website,
lawfaremedia.org/support. You'll also get access to special events and other
content available only to our supporters.
Please rate and review us wherever you get your podcasts and
look out for our other podcast offerings, including Rational Security,
Allies, and Escalation, our latest Lawfare Presents podcast
series on the war in Ukraine. Check out our written work at lawfaremedia.org.
The podcast is edited by Jen Patja. Our theme song is from Alibi Music. As
always, thank you for listening.