Scaling Laws: Values in AI: Safety, Ethics, and Innovation with OpenAI's Brian Fuller

Published by The Lawfare Institute
Brian Fuller, product policy leader at OpenAI, joins Kevin to discuss the challenges of designing policies that ensure AI technologies are safe, aligned, and socially beneficial, from the fast-paced landscape of AI development to the balancing of innovation with ethical responsibility. Tune in to gain insights into the frameworks that guide AI's integration into society and the critical questions that shape its future.
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
This episode ran on the Lawfare Daily podcast feed as the Aug. 8 episode.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Please note that the transcript was auto-generated and may contain errors.
Transcript
[Intro]
Alan Rozenshtein: When the AI overlords take over, what are you most excited about?
Kevin Frazier: It's, it's not crazy, it's just smart.
Alan Rozenshtein: I think just this year, in the first six months, there have been something like a thousand laws.
Kevin Frazier: Who's actually building the scaffolding around how it's gonna work, how everyday folks are gonna use it?
Alan Rozenshtein: AI only works if society lets it work.
Kevin Frazier: There are so many questions that have to be figured out and
Alan Rozenshtein: Nobody came to my bonus class.
Kevin Frazier: Let's enforce the rules of the road.
Welcome back to Scaling Laws, the podcast brought to you by Lawfare and the University of Texas School of Law that explores the intersection of AI, law, and policy.
Today’s episode takes you inside one of the most fascinating frontiers in tech, not just building AI, but designing the internal rules, systems, and guardrails that shape how AI enters the world. We're joined by a product policy leader at OpenAI, Brian Fuller.
He's embedded in the high stakes, high speed work of translating innovation into impact when the pace of model development accelerates, so too must the frameworks that ensure AI is safe, aligned, and socially beneficial. So how do you design policy in real time alongside engineers pushing the limits of what's possible, and how do you make sure those breakthroughs don't outpace the public interest? This conversation explores the art of asking the right questions before a product ships and sometimes asking them loud enough to slow things down.
[Main Podcast]
Brian, thanks so much for joining the pod.
Brian Fuller: Thanks so much for having me, Kevin. It's a pleasure to be here.
Kevin Frazier: So if we were talking to someone 50 years ago, a hundred years ago, who knows, and said, hi, I am Brian, I work in product policy, we'd probably need to have at least a five-minute conversation. In, in modern times, I think more folks have probably seen that friend on LinkedIn who's celebrating becoming product policy at company X, Y, or Z, but for those who don't know or don't have that friend, what the heck is your job?
Brian Fuller: That's a great question. Okay, so product policy is the group at OpenAI that sits within this strategy organization, and so we're strategic. We broadly set the strategy for the company's approach to safety and integrity issues, and then we try to help that strategic vision get realized. And there are kinda like two things that we do to help realize that vision. The first is that we advise product teams that are building stuff at OpenAI. We help them navigate safety and integrity policy and sometimes legal concerns.
And the second thing that we do is we own all of the rules that govern user behavior on the platform. So what you're allowed to ask ChatGPT, and what you're not allowed to ask ChatGPT, for example. And you know, I am, I'm just gonna do a quick plug. I'm firmly of the opinion that product policy within OpenAI is the coolest organization. I know that's saying a lot 'cause there are a lot of cool orgs, but this is a group of people that is just uniquely intelligent but also profoundly kind, which is such a rare combination to have together, like, you know
Kevin Frazier: I, I will say that–
Brian Fuller: Josh Watson for example, right.
Kevin Frazier: Full disclosure, full disclosure to the listeners and or watchers, I've had a beer, maybe two with Brian, and he is wonderfully enjoyable and frustratingly smart and it's that combination that drives you wild because you think you can only be one, and yet here Brian is, is definitely both and I think that you testifying to that about your colleagues is even more impressive.
But I want to come back to the actual product policy role for just one second, because I think a lot of folks hear the word policy and they assume, oh, Brian must be a lawyer, or exclusively works with lawyers, or is interfacing with lawmakers themselves or regulators. But you mentioned this strategy component, which necessarily brings in some questions of what's profitable, what's best in the short run and the long term for the business interests of OpenAI. So how do you weigh, or what are some of the factors that are driving your policy decisions, and how does that kind of influence how you all come out on these very weighty, very significant questions?
Brian Fuller: Yeah, no, that's a great question. Okay, so this varies product by product, right? Because the company's business objectives change product by product, like for ChatGPT, realistically, the company's long-term objective is to grow the number of people who are using that product, demonstrate greater utility in that product, and so it's a really long-term vision that you need to bear in mind. Whereas there are other products that we might release that are a shorter-term vision with more concrete milestones that we're trying to hit on an accelerated timeline. And so you do have to balance out business interests against a host of other factors.
The other factors that you're really strongly looking into include privacy considerations, like how much data is used? In what way is it used? Is it effectively communicated to users how that data is gonna be used? Integrity considerations, like how strict are we setting up the, the systems that are gonna be looking into what people are asking of a model. When are people penalized? When are they not? How are they informed that they're about to be penalized? 'Cause of course the objective isn't to penalize people, it's to help people to do the right thing, so how do you set up those systems? So like it's, it's really complicated, but it's all informed by two big overarching things.
The first is the company's longer-term, like top-level business goals, and the second is the regulatory and policymaking environments in which we find ourselves. So knowing what policymakers are thinking about, talking about, and caring about is ultimately really important to do this job well, because you can't play chess effectively if you don't know all of the moves that are being made. So strategy, yeah.
Kevin Frazier: Yeah, well, I, I, I think that some listeners would probably think, huh, are lawmakers really playing chess right now with respect to AI regulation? Or is it maybe something akin to dodgeball or something a little bit more sinister and maybe less intellectually driven? But I'll leave that for a different discourse.
But I wanna zero in, in particular, on how you kind of work with other parts of OpenAI to get a sense of what those key values are and key determinants are of what is the right policy to make sure users quote unquote do the right thing, right. And also how do you engage specifically with external stakeholders in that process, because as well intentioned as lawmakers may be, let's just presume they're playing chess, there is a bit of a regulatory void depending on the jurisdiction, of course, but I think here in the U.S. in particular, no one is really saying, oh, we figured out AI policy, OpenAI, Anthropic, so on and so forth. They know exactly what they need to, to do to adhere to the, the specific rules and goals we have for AI.
So how do you make sense of all the different stakeholders who are involved in some of these decisions?
Brian Fuller: Yeah, okay, so I'll break that question down into two parts, right? There's internal stakeholders and then there's external stakeholders. I'll tackle the external stakeholders piece first. So we have a small group of external policy advisors that we can call on, we also have some people outside the company who do red teaming for us. We like to utilize both groups to get advice from policy experts and also security experts. Now we don't rely on a huge, enormous group of people for this work. And there's a good reason for that, which is if you're doing work for one of the world's most valuable companies, I think you rightly expect to be compensated for your time.
And if you're being called upon a lot to do that kind of advice work, it's a lot easier to just hire that person and bring them in-house 'cause it makes the whole getting paid thing a lot more streamlined, so that's number one. Number two, as far as the like external regulatory environment piece, I guess I have a bit of a confession. I doom scroll political news.
Kevin Frazier: Wow. That is, that is a rough existence, sir. I'm, I'm sorry for you there.
Brian Fuller: It's, you know, it's a little bit dark right now for some people. Let's not get into that, but this isn't really helped by the fact that we get a lot of newsletters internally about the latest in AI regulations.
So my broader team is just about as committed as I am to being the first to know what's happening, both in Brussels and in the Beltway. And so I'm constantly getting pings from people going, you know, did you hear what Vice President Vance said? Did you, have you heard about the latest EU regulations that have been proposed? It's, it's wild man.
Kevin Frazier: It's a lot to keep track of, I mean, I think that folks in a sort of U.S. bubble will say, oh my gosh, just the regulatory chaos at the state level, at the federal level would alone elicit a sort of frantic mentality of how the heck do I stay on top of all of this? But to your point, you all have to be asking, what's Japan doing? What's happening in the EU? What's China thinking about? Just keeping tabs on all of that seems nearly impossible. And I just want to add to the difficulty of your job, which is to say there's some speculation, and I don't want you to address this, and, and no harm, no foul here, that you know, it appears as though OpenAI is seemingly always ready to follow on a product announcement by one of its rivals with its own product announcement.
Just by some miraculous timing, this seems to always happen. And the hunch from maybe some external folks would be, wow, maybe OpenAI occasionally has to push faster on releasing a product than intended. In that hypothetical world of having to, to perhaps move very quickly, how do you all balance that sort of competitive pressure to remain on the frontier of whatever's happening with instilling the values underpinning your product policy work?
Brian Fuller: Yeah, so I, you said that I didn't have to address this, but I'm gonna do it anyway. Okay, so I think, I think you're touching on something that a lot of commentators touch on, which is that there's this impression that AI companies are competing against each other and are deeply reactive to what's happening in another lab.
That I think is true for some of the companies in the field, but I have not seen that at OpenAI, I, I genuinely have not. Like we are interested in what other labs are doing, not because we feel this like frantic need to murder them on the field of economic combat, but rather because we're genuinely excited by the technology and the ways it's advancing.
So if someone else is doing something really cool, we're gonna look at that, not because we feel like we have to copy them, but because it's just really cool. Like, you know, look, I, I think it's the difference between being motivated by money and being motivated by advancing the state of the art, advancing scientific progress.
I'm like, I'm not talking smack about anybody, like I need money to buy my groceries just like anybody else, but I think that commentators like framing this as a cutthroat game of technocratic skullduggery or something, since, like, it plays better in the news, it's more exciting.
Kevin Frazier: Anytime I see a headline with Skullduggery, that's, I'm definitely clicking that.
Brian Fuller: Yeah, that's a, yeah, that's a, that's a, a technical term in the policy world of course. You know, in reality we're just over here trying to like, make some cool stuff.
Kevin Frazier: Right, right. Well, I will say you do make a lot of very cool stuff and sometimes it doesn't always work as intended. I think listeners will be well aware of a particular model, perhaps having some sycophantic tendencies that no one wanted to see, and yet it was released into the wild being very appeasing and celebrating all of my chillingly good questions that I was asking, and it did make me feel pretty good for a while, but how does something like that happen?
We've seen a postmortem on the technical side exploring what may have gone wrong with GPT-4o and those sycophantic tendencies. From a product policy perspective, who dropped the ball, Brian? What, what happened? What went wrong?
Brian Fuller: Look, I, I think this is a really tough one, right? Because, okay, you and I are in conversation right now. As a good conversationalist, you want to signal that you understand and empathize with the person to whom you're communicating. Like people do this in a lot of really important but subtle ways where we are responding to what the other person says, and there are both like verbal and nonverbal cues. With an AI, they aren't embodied, so you're not getting the same nonverbal cues, everything has to be explicit. And so one of the things that you could do to try to help people feel more understood, and being understood is deeply important to having a meaningful conversation, whether it's with a human or with a non-human artificial intelligence, one of the ways you can do that is to affirm what the other person has told you, because it shows both that you've understood, that you empathize, and that you're on the same wavelength.
I, I do think that you're right that as has been reported in the news, it is possible to err too far on the side of showing that kind of affirmation. And, you know, when the company became aware of the issue, everyone immediately started working on a, on a solution to that problem, and I think they did a great job. So, you know, I, I, I just think it's important to, to bear in mind that we are on the forefront, the bleeding edge of this tech and we're gonna have to make some minor adjustments as we go along.
Kevin Frazier: So speaking of breakthrough tech, we are talking in late July, and word on the street is that GPT-5 may be around the corner and everyone is getting very excited about what that may mean. We heard Sam, your boss, recently go on a podcast and speak about how he was floored by a response that some new model was able to provide. And so everyone's getting ready for that moment. And I wonder if you can walk us through what is the actual level of engagement your team has with the AI developers, with the engineers from the sort of ideation of, oh, let's start this training process, or let's start this pre-training process, through deployment. What's that actually look like? Are you all, you know, in a chair eavesdropping on every line of code that someone's generating and thinking through? Or are you just kind of like someone everyone has to high five in the hall before they release something, and that's the check mark? What's it look like?
Brian Fuller: Yeah, that's a great question. So you could put me in front of lines of code and that would not help you or me. Like I went to law school, man, just doing my best over here.
Kevin Frazier: We're better at some stuff, but we're very bad at a lot of things.
Brian Fuller: It's true. If I, if I could code as well as the other engineers around here, I would not be on this podcast right now, let me tell you. I would be doing some other stuff, look, okay. Alright, so here's how this works and I, I, I obviously can't speak to any specific GPT whether that's a five, a seven and a half, whatever it is. But here's how this works. So when teams get an idea for a product that they want to build, they'll start off by like sketching out the product's expected features, they'll create some images of what they expect the product to look like. They then pass all that along to my organization, and then we pull in a bunch of different stakeholders from, from across the company and we all sit down and we have an extremely in depth brainstorming session or sessions where we think of all the things that could go wrong, like all the outcomes that we want to avoid. We make a big matrix of all these risks, and then we go through the process of proposing solutions that we think would effectively eliminate or at least significantly reduce all of those risks. When we get buy-in from all the folks across the company, including the people who are actually building the product, we then have to ensure that the, that the solutions that we've proposed are in line with the vision for the product that the team wants to build. So it has to be both, you know, in line with the vision and also has to be feasible with the resources that we have. Once the mitigations are in place, we then test them to ensure that they're all working and I've never seen this process not work as it's supposed to because the folks who are conducting these tests are really clear-eyed about the consequences of failure. Like, you can trust me on this. No one wants the bad outcomes. In the event of a failed test, we, you know, roll back and iterate.
Kevin Frazier: Yeah, let's, let's pause there for a second, because a hot topic among the AI safety and AI policy community generally, since maybe 2024, early 2023, has been the need for greater whistleblower protections for workers in AI labs, out of a fear that perhaps folks aren't willing to step forward when they think, huh, this planned mitigation actually isn't going to address the underlying risk that we, we identified. So from your vantage point, would you say that the process is adequate here in terms of being able to express those concerns, knowing that everyone is considering the same values, the same thresholds that need to be checked off?
Brian Fuller: I'll, I'll, I'll first answer your actual question and then I'll jump into like debate the premise.
Kevin Frazier: Love it.
Brian Fuller: So the actual answer to the question is yes, this process is working well. I have never seen anyone sandbag either the tests themselves or the proposed mitigations that would effectively de-risk a product.
Kevin Frazier: Sandbagging here meaning providing kind of false results or suggesting that a, a mitigation has worked when in fact it hasn't.
Brian Fuller: Right. Or I mean, so typically the way this, this would work if you were gonna sandbag wouldn't be just to falsify data, it would instead be to construct the test in a way that it comes to a presupposed conclusion. And then when you reach the conclusion, you just go, aha, look, we tested it. Ah, behold.
Kevin Frazier: Yay.
Brian Fuller: You were right. I think it's really important though to return to your question's framing, that you not have everyone aligned on the same set of values. Like I think it is actually really important to have a diversity of opinion, because sometimes values directly trade off against each other, and so like, I'll give you the classic example. Imagine a society in which you are trying to balance out safety and privacy considerations. And for your listeners who are in law school or have gone to law school, this is gonna be deeply familiar. So, imagine a world where there are police officers that follow you everywhere that you go. When you go to sleep at night, they are standing by your bedside. When you go to the bathroom in a public restroom, they are there in the stall. You are covered, you're covered. But imagine, so like think about this. You are completely safe. No one can harm you, but you also have no privacy. Conversely, you can imagine the flip side, right? Where like police officers can't even talk to you, they can't read your email, they can never enter your home, they have no power to investigate anything. You'd have a lot more privacy than you do now in the world we inhabit, but you'd also be a lot less safe. And so you actually do need people on both sides of the safety and privacy debate in order to come up with an outcome that is reasonable, that doesn't just err on one side too far and end up in a world where it looks a little bit dystopian.
Kevin Frazier: Well, and to explore that further too. You and I both get to call Austin home, which is excellent, and we get our breakfast tacos and we hang out at Zilker Brewing for folks who wanna know where the best beer is, but that comes with a unique set of values and a specific set of values. And a lot of your colleagues work in San Francisco, and a lot of them are in New York or so on and so forth, all of which have specific value sets and at the same time somewhat homogenous value sets.
And yet the user base of ChatGPT alone is pretty much the entire global community at this point. I mean, the adoption and diffusion of AI is off the charts, literally, when compared to other technologies, so how are you exploring some of those value questions with respect to communities that often haven't had a seat at the table, given that releasing ChatGPT 5, 6, 7, 8, 9, or 10 is definitely going to impact their communities as well as those in the States?
Brian Fuller: No, you're, you are absolutely right. Like you can't, so okay, OpenAI's mission is to develop artificial general intelligence that benefits all of humanity. The all of humanity is really important. So OpenAI takes a truly global approach to creating policies and advising teams. And so when we start doing the policy work for any given product, we take a global view of the regulatory landscape. To your point about engaging with the communities themselves, like we have been doing this, I think much more so than many of the other labs, so OpenAI just sent out a large delegation of folks over to Kenya. We're doing OpenAI for Countries, like we announced this partnership with the UAE not that long ago, like we are, we're truly taking a global approach to this issue, because to your point, yeah, this is gonna be an AI world that we're gonna live in, all of us together on this planet.
Kevin Frazier: Back for a second to the role of your team in product development, how far does that reach, does that reach down to the conditions, for example, of data labelers in Kenya, for example, or in, in countries around the world who are having a very real role in influencing data inputs and other aspects of the tech stack?
Brian Fuller: Can you explain that a little bit more?
Kevin Frazier: Yeah, so there have been reports, for example, starting probably early in 2023 about some of the less than ideal, and I'm just gonna go ahead and say egregious, working conditions of individuals who are helping label training data and assist with reinforcement learning, and those folks were getting paid horrible wages, not working in great environments. Does the product policy team factor in that full range of AI development, or are you specifically focused on what's kind of directly happening in-house at OpenAI?
Brian Fuller: Yeah, so my team advises the product group that is developing each product, we also control the usage policies as I mentioned. We are not ultimately the team that advises on worker conditions for any specific vendor, although I will say that I spent 12 years working at Meta before coming to OpenAI, so I'm very familiar with and of like mind with you about the conditions that the people who do a lot of this labeling are asked to, to operate within. I have not seen those same criticisms levied at OpenAI specifically, but if I am, if I'm misinformed and that, that there have been specific allegations, please like, let me know.
Kevin Frazier: Yeah. And so also to hit on some of the risks that you all are discussing at the policy table, you mentioned that everyone's kind of throwing risks out there, throwing mitigations out there, and trying to get a sense of how are we going to release this product in the best way possible?
What are some of the risks that are top of mind for you right now? I mean, obviously folks will continue to come up with both creative and destructive use cases of AI, what rises to the fore right now? Is it, is it just the continued creation of Studio Ghibli memes expending your GPU, or is it, you know, the development of bioweapons, or somewhere in between? What's rising to the top of the agenda?
Brian Fuller: Yeah, so I think we've seen a bit of a shift. Over the past couple years, like when I first began doing AI policy work, people were mainly concerned about toxic model responses. Like folks got really bent outta shape about perceived political bias and text outputs, like Google had to revamp an entire image generation model 'cause it was creating images that weren't historically accurate.
People were also really worried about like more existential kinds of risks, like models producing information on how to make bioweapons, but those risks weren't particularly pressing because the models just weren't good enough yet in those like existential areas. I think that's all changing now, like models are getting really good at hypothesizing about ways that people could harm each other, whether through bioweapons development or otherwise, and we need to start taking these kinds of risks, I think, more seriously.
And we need to, frankly, stop, I think, getting as bent out of shape when a model produces a response that, like, has a naughty word in it, or when a model seems to have some kind of a political viewpoint. Like I think we need to start thinking about this question, and this is the thing that keeps me up at night, frankly, which is what happens when an AI lab, maybe it's in China, releases an AI model that just doesn't have great safeguards when it comes to something like bioweapons development. Like when there's a model that exists in the world which is both highly performant and willing and able to help people create more virulent strains of smallpox or something, what do we do? Like we've just lost the ability to restrict bioweapons development through controlling access to relevant scientific information. So there are like two big prongs of how to stop bad people from doing bad things with nuclear, bio, or chem, right? And it's, you need to know how to do a thing and you need to have the machines, the actual items necessary to put that knowledge into practice.
And with bioweapons, it's particularly spooky in my view because so much of it is not machine dependent. Like there is a lot that is, but the machines to do this work are not that inaccessible. The information presently is way more difficult to obtain, and I think we're moving into a world where that information may be more accessible. It's not gonna be because of OpenAI. Like I know what we are doing inside, and my God, we're taking this seriously, but what if it wasn't OpenAI, and we had a lab that was doing this work and it was based in China or Russia?
Kevin Frazier: And so with respect to these CBRN risks, as they're oftentimes referred to, chemical, biological, radiological, and nuclear, you're mentioning that this could be a lab elsewhere that fails to adequately consider those risks and prevent the release of that information to bad actors. What's the role of OpenAI here though, in terms of is it helping support allies or perhaps even the U.S. government explicitly respond, or are you all going to then snap into sort of like Iron Man mode and develop a counterattacking AI? You know, how does this factor into OpenAI's mission?
Brian Fuller: Yeah, so I think that the job of OpenAI, is, is not to produce a paramilitary force that goes in to destroy the servers of the competitor in China or Russia.
Kevin Frazier: Brian, I've gotta ask these questions. I gotta,
Brian Fuller: I know, I know.
Kevin Frazier: I gotta, I gotta go down every route. Yeah.
Brian Fuller: I mean, it's a, it's a controversial viewpoint, I know, that private companies should not have their own militaries.
I'm joking, I'm joking. What I do think the role of OpenAI is, is to set standards by which other companies can be held. Like how do you know, as the United States government, when another lab is operating in a way that is fundamentally unsafe? Like OpenAI can tell you, hey, we're worried about this lab because they are producing this model, it has these demonstrated capabilities, and we think that that's bad, but ultimately it's just OpenAI's viewpoint. Like we need to set an industry consensus around what safe looks like within this arena, and we need to have an expert group that we can call upon to validate when we are worried about something. So I think it's really about setting standards and ensuring that everyone is on the same page so that when someone goes off the rails a bit, there is a metric by which you can judge how far they have gone off the rails.
Kevin Frazier: So we saw that CBRN risks, as well as concerns around malicious exfiltration of model weights, for example, or unintended access to data centers, all of these very real, very potentially harmful risks were addressed in the AI Action Plan, but I wonder more generally, if you think that concern around CBRN risks is overinflated or underinflated, underappreciated or excessively worried about. Where are we on the spectrum from a, a policy standpoint, and you don't need to name names, if you wanna name senators, if you wanna name representatives or groups, go for it, but at a high level where do you think we are in our discourse in terms of balancing, on the one hand, the very real possibility of some of these tail risks manifesting and causing severe and irreversible harm, and on the other hand, the possibility that AI could cure cancer, allow everyone to become literate, and create just a 20 to 30% increase in GDP tomorrow?
Brian Fuller: Okay, well I do think that AI will be extraordinarily helpful in curing cancer. I'll start there because that
Kevin Frazier: Good, good. We'll take that. That's a good win.
Brian Fuller: So, you know, 30% GDP gain in 24 hours, that feels great, but I don't know that that's gonna be totally workable.
Kevin Frazier: Dang, I gotta change some plans, but, okay.
Brian Fuller: Also, thank you for the invitation to smack talk specific policy makers. Shockingly, maybe, I'm not gonna do that. That doesn't seem like a great approach.
Kevin Frazier: Fair enough
Brian Fuller: to gaining consensus
Kevin Frazier: Fair enough.
Brian Fuller: Yeah. But no, I, I, I think that you are touching upon a question that is really important, which is are people taking things seriously with regard to existential risks. And I think it kind of like goes back to how I started the answer to this question, which is that if you had asked me the same thing two years ago, I would've said no, people are way too focused on the nau, the naughty thing that Grok said yesterday, right? People are, I, I think people are slowly getting to be of the opinion that it's actually kind of fun when AI is a little bit raunchy, like just a little bit, like there is a line that exists.
Kevin Frazier: You want that friend in the group, right? You want someone who mixes it up. A little spice.
Brian Fuller: Right, yeah. Like I, I think it's important that people choose what kind of AI they want to engage with, and that if they're gonna talk to an AI that's raunchy, that they know that they're talking to a raunchy AI. But I do think that bigger picture, I do think that people are moving more toward focusing on existential risks as the, as the main vector of potential harm and I think that the concern around this topic is frankly warranted.
Kevin Frazier: So how do you ground yourself in this broader discourse, because one of my favorite things about this job is I get to talk with folks across the spectrum on a lot of these AI conversations. We had Sayash Kapoor, author of “AI as Normal Technology” and “AI Snake Oil,” on. We've had a number of folks who are definitely way more concerned about those existential risks happening tomorrow than Sayash, for example. How do you, Brian Fuller, just expose yourself to this full discourse, because having worked for Google, having spent time in the Bay, I am well aware of the fact that the conversations that occur in San Francisco, that occur in New York, that occur in Austin aren't always the conversations that are happening in Omaha or in Helena, Montana. How the heck do you, you kind of ground yourself?
Brian Fuller: Yeah, so I, I think that there's really only one way to do this effectively, and that's to talk to people that are smarter than you. And like, I, I wanna just, yeah.
Kevin Frazier: Yeah, I never have that problem because it's just about every single person.
Brian Fuller: Oh, no, you might not, yeah, you might not. You're a smart guy. I, there are lots of people in the world that are smarter than me. And I think the, the right way to go about this is, okay, here's one of the things that I've learned in my career: It's that if you go into a room and you don't really know what you're talking about, but you try to act as though you do, people are kind enough to both recognize that you're a little bit full of it, but also kind enough to not wound your pride by trying to educate you about something that you're already pretending like you actually know and so what happens is you actually just go through life perpetually ignorant. And that's not ideal, right?
Kevin Frazier: That's a bad, that's a bad thing on your tombstone. Perpetually ignorant is not what you want.
Brian Fuller: No, that's the truth. You know, I gain trust with people by just admitting when I don't know and then asking for help and people, when you come in with that level of humility, they are very willing to educate you and they'll spend a lot of time helping you to like truly understand the topic, which is great because the volume of knowledge that I lack could fill the Library of Congress, especially about artificial intelligence, which is
Kevin Frazier: another, another thing we share in common: We love Austin and we love admitting that we don't know that much
Brian Fuller: Which is, I think, really helpful. Like I went to law school, again, like I, I know just enough about how a neural network operates to know that I essentially know nothing about how a neural network operates, but like this is a hard lesson, I think, to learn for a lot of people, and I'll give you, this is a short anecdote about me being ignorant and how I kind of learned this. It's one of my favorite anecdotes because it ties into both, a, how I'm a bit of a doofus sometimes, and, b, how AI and policymaking actually work together. So, okay, about two years ago, I'm working at Meta and I'm the person at Meta who's tasked with writing all of the policies that govern all of Meta's AI models across their various modalities. So it's an easy, easy job.
Kevin Frazier: An easy job.
Brian Fuller: Yeah, it's a piece of cake. And someone comes to me and goes, hey, we're, we're making this image generation model, can you write the policies for it? And I got all excited and I go, yes, I absolutely will do that. And one of the things I focused on right away was the rules around nudity.
Like it's a pretty classic, you know, don't make a naked person kind of AI problem
Kevin Frazier: Usually pops up. Yeah, yeah.
Brian Fuller: Right. Yeah. You know, but Meta had been doing nudity prevention for a really long time, because Facebook and Instagram exist, and so people upload nude photos and then we have to figure out, like, is that actually nude?
And the rules for nudity in Meta’s, like, community standards are pretty nuanced, like it talks about what the person is wearing, how much skin you can see, where the camera is positioned, what pose the person is in, their apparent, like, it's, it's really complicated. And so I said, you know, let's just make this way simpler, guys, like let's just, you know, throw out all the work that y'all have done before in the policy space 'cause I'm such a savant, let's just say this, let's say no genitals and boom, and on a woman, no visible nipples. Let's just do that, and I thought, you know, I've solved it, how smart am I? Here's what happened: Within a week, our image generation model started producing images like it had been effectively trained on my new policy, it started producing images of people who were completely naked, head to foot, who just inexplicably lacked those specific elements of their anatomy. And so I had basically coached an AI model into just making photorealistic Barbie dolls, which is not actually what we were kind of like looking for.
Kevin Frazier: It was a year of the Barbie though, so, you know, it kind of may have been a, a, a pro rather than a con in some cases, but Brian, I have to admit, I'm a little shocked because all you have to do is go to Barton Springs once here in Austin to see a bunch of people showing exactly what you just foreclosed to realize that, yeah, maybe that policy wasn't gonna work as intended, but tell me more about you being a doofus.
Brian Fuller: Okay, great. I will. So you're right. So, a, you're right that nudity exists in the world. And I personally, this is not necessarily the position of OpenAI as a whole, but I'm generally of the position that if you're an adult, you should be able to view and interact with even nudity, like, you're an adult.
It's, it's legal to view nudity. Maybe private companies shouldn't be in the position of censoring legal content for you when you like it and it's not hurting you. But when I was making these policies at Meta, we didn't have what's called an age gate, which is, if you're over 18, this is the model behavior, if you're under 18, this instead is the model behavior. And people are generally of the opinion, I think reasonably, that for the developing mind of a young teen, we probably don't want them engaging with nudity. And I think the legal regime that is taking hold reflects that view, but, but yeah, so anyway, we, we came up with this system where we're making nude Barbie dolls and no one wanted this. And so I learned two lessons from this experience,
Kevin Frazier: Except for Ken. Ken was, Ken was really into this.
Brian Fuller: Yeah, I mean. I mean, yeah, fair enough. So, like, I learned two lessons. The first is, writing policies for AI models is hard, and it's not like, most importantly, it's not like writing policies for organic content, because AI models don't have to abide by the laws of physics and human anatomy, they can just, you know, skirt around whatever policy you make to do the thing that, you know, you didn't expect. The second is, I learned it's really important that you embed yourself with the people who know more than you about the way that these models actually operate. And so one of the last things that I did at Meta that I'm one of the most proud of is I made this system, and I, I say I made it, it was really a, like, group effort: it was, it was me, Creighton Davis from Meta's legal team, and Chloe Bakalar from Meta's engineering organization. We made this triumvirate system where policymaking was shockingly not entirely owned by the policy organization, but was instead a group effort. And so we came up with outcomes that were way better than anything that I could have made in a silo, and so, you know, both of those people, by the way, are two of my closest friends now, so like I really learned this lesson that if you work with other people in a collaborative way and you admit when you don't know stuff, the outcomes that you reach, both like professionally and personally, are better.
Kevin Frazier: My wife reminds me of that every day of how if I just asked her more questions then a lot, a lot of things would go better, but we'll leave that for a different podcast. I do think it is hitting on one of my big hobby horses in this debate generally of how do we approach the AI policy question, which is just admit you don't know everything and build that into the actual process itself. So listeners will have probably heard me rant and rave about the need for sunset clauses and retrospective review baked into just about every piece of AI legislation, because if you think you know how AI's gonna unfold in the next six months or two years, or three years, you're in the wrong job, you should probably be making trillions of dollars based off of those predictions rather than legislating or policy making. But Brian, I, I, I think that there are surely some listeners who are saying, my gosh, Brian, what an interesting job, all those stories, how fascinating, how the heck do I end up doing product policy at a major lab? So in a kind of short overview for the folks who are thinking, huh, I wanna get into AI policy, and I perhaps even want to go work for a lab. What's your recommendation? What's your advice?
Brian Fuller: Yeah, okay, so this is a hard one to answer because there isn't really like a traditional path to getting into product policy, especially not AI product policy. I will say that having a law degree is really helpful because you need to be able to think strategically about extraordinarily complicated topics. And I think that one of the things that getting a law degree does is it really does teach you how to be a well-honed critical thinker. And that's a skill that's really useful. But I, okay, I like to go mountaineering, right? Like I'm not an expert mountaineer for anyone who is an expert mountaineer, don't hit me up, I, I'm not gonna go climb in the Karakoram with you.
Kevin Frazier: Boo
Brian Fuller: I'm sorry. I'm not there yet. I know, right? But one of the things that you learn when you go up on a mountain is there are these footprints from people who have climbed up a slope before you, and what you wanna do is step in those same footprints because you go, this is gonna be easier. Like when you climb up a snowy slope, you have to kick in with the points of your crampons to grab into the snow and grab a hold, but when you step in people's footsteps, you don't actually have to kick in, you can just step on them. The trick is that, a, you don't actually know where those footprints are going all the time, so you may end up in a place where you actually aren't that happy with being there, and, b, those footprints get icy really quick, and so you can slip and fall a lot easier than you would if you were kicking in. And so, I think for everybody to get to a place in their career where they're gonna be happy, you gotta just kick into the snow, man. You gotta just, you gotta just go for it. I'll tell you, I'll tell you how I learned this lesson.
Okay, this is, this is gonna be short. I promise
Kevin Frazier: I love it, I love it
Brian Fuller: I'm not gonna wax poetic.
Kevin Frazier: Well, I, I just want to pause and say that, yeah, I think there needs to be a new bumper sticker: who needs lean in if you can have kick in, right? So I, I may, you may start seeing these around Austin: Kick in. I love it.
Brian Fuller: Okay, well, yeah, I'm definitely not gonna try to compete with Sheryl Sandberg for, for slogans. She's got a great team.
Kevin Frazier: She's got a good team. Some, some PR behind her, but yes, that's right. Let's, let's hear your story.
Brian Fuller: Okay. So like, rewind seven years or so. Okay, I am in the operations organization at Meta and I am not happy. I do not like where my career has landed me. I, I went to law school. I was an IP lawyer for a couple years. I worked for a, a video game company. It was, it was neat. And then I moved over to Meta and I joined an ops role, and when I joined, they asked me, you know, can you help design all these IP rules for the notice-and-takedown program, because we had one IP lawyer at the time for the entirety of Facebook. And so I was like, ooh, cool, I'm just gonna be super influential in helping to drive notice and takedown for the world's largest notice-and-takedown program. Cool, awesome. That lasted, you know, a couple years before they hired an enormous legal team, and then they were like, well, we don't need you ops folks anymore. And so I moved over into doing this role where I was helping advise sales teams whose sales clients would find content on Facebook or Instagram and they wouldn't like it, and they would ask their salespeople, can you get this content removed? So it'd be stuff like, you know, Procter and Gamble's sales team is contacting us, going like, hey, Procter and Gamble's about to pull $50 million in ad spend unless you take down this post that criticizes their toothpaste, and it was my job.
Kevin Frazier: How dare you insult our soap!
Brian Fuller: Right, exactly. And yeah, and it was my job to go, hey, okay, the policies that exist are not dependent on how much a client spends, like people are allowed to say mean things about toothpaste, guys
Kevin Frazier: You can have hot soap takes whenever you need to.
Brian Fuller: Yeah, exactly, right. And so these salespeople were really intense and I couldn't figure out for the life of me why they were so intense.
And then one day I just asked them like, why are y'all so mad all the time? And they said, well, okay, here's how this is gonna work, you're gonna tell me no, that toothpaste thing isn't coming down. Then I'm gonna have to get on the phone with like Procter and Gamble and some VP is gonna scream into the cell phone, just ream me out. And then maybe I'll be able to convince them not to cancel their ad campaign, that like, actually we do value them and that like the way that we demonstrate that we value them is not by, you know, adhering to their policy requests, but instead by, you know, giving them revenue in exchange for their ad money.
But I had this idea and it was, hey, like no one told me that I could do this, instead, my manager dissuaded me, or tried to, and said like, you know, if, if, if you do this, nothing good can happen for you, and if you do it, only bad things can result, but I sort of volunteered to join those calls with the VPs of outside advertisers. And so I was the, I was the, like, whipping boy who just got yelled at, 'cause I was like, I went to law school, like that was basically my law school experience, was just getting yelled at. I
Kevin Frazier: I will say at UT, we don't just yell at our students, but I know at Baylor they have different expectations.
Brian Fuller: The Socratic method is alive and well at Baylor Law School, it produces some really intense trial lawyers mainly 'cause they've been yelled at for three years and they have a high tolerance for pain. And so I was like, hey, I'm good at getting yelled at, like I'll go, I'll do that. And so I, I did, I just got yelled at over and over again, and you know, somebody, somebody would yell at me and then by the end of the conversation they would've cooled down just enough for me to tell them like, here's how freedom of speech works and here's why this is like a good thing, and then they'd go, oh, okay, maybe you're right, or sometimes they wouldn't. But you know, I think more often than not, I managed to walk 'em around. But the policy organization noticed. I was stuck in ops and I wanted to be in policy for a long time and I couldn't get the policy organization to notice me. And so it, it took me getting yelled at by a whole bunch of highly paid executives to have the policy organization go, hey man, if this guy is just willing to get absolutely pummeled on behalf of our policies, maybe we should just like get him in here so he can at least have some influence over the policies that he's getting pummeled about.
Kevin Frazier: I love it. I love it. Kick in, keep climbing, just breathe, breathe deep and climb up that mountain. That is impressive, sir. Well, I know you have a lot of mountains to climb and a lot of battles to fight in the policy space. So, we'll have to leave it there, but thank you so much for joining Brian.
Brian Fuller: Thanks so much, Kevin, it was a pleasure to be here.
Kevin Frazier: Scaling Laws is a joint production of Lawfare and the University of Texas School of Law. You can get an ad free version of this and other Lawfare podcasts by becoming a Lawfare material supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.
Please rate and review us wherever you get your podcasts. Check out our written work at lawfaremedia.org. You can also follow us on X and Bluesky and email us at scalinglaws@lawfaremedia.org. This podcast was edited by Jay Venables from Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.