Cybersecurity & Tech

Lawfare Daily: Page Hedley and Gad Weiss on OpenAI’s Latest Corporate Governance Pivot

Kevin Frazier, Alan Z. Rozenshtein, Page Hedley, Gad Weiss, Jen Patja
Thursday, May 22, 2025, 7:00 AM
Breaking down OpenAI's planned corporate restructuring. 

Published by The Lawfare Institute
in Cooperation With
Brookings

Page Hedley, Senior Advisor at Forecasting Research Institute and co-author of the Not for Private Gain letter urging state attorneys general to stop OpenAI’s planned restructuring, and Gad Weiss, the Wagner Fellow in Law & Business at NYU Law, join Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Senior Editor at Lawfare, to analyze news of OpenAI once again modifying its corporate governance structure. The group breaks down the rationale for the proposed modification, the relevant underlying law, and the significance of corporate governance in shaping the direction of AI development.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Intro]

Page Hedley: I am not surprised by this. I am surprised that people are not more exercised, given the fact that this is the whole point of those safeguards. This is the moment they were put in place to guard against the, the risks that they're put in place to guard against.

Kevin Frazier: It's The Lawfare Podcast. I'm Kevin Frazier, the AI Innovation and Law Fellow at Texas Law, and a contributing editor at Lawfare. I and Alan Rozenshtein, University of Minnesota law professor and Lawfare senior editor, were joined by Page Hedley, senior advisor at Forecasting Research Institute and co-author of the Not for Private Gain letter urging state attorneys general to stop OpenAI's planned restructuring, and Gad Weiss, the Wagner Fellow in Law and Business at NYU Law.

Gad Weiss: If investors come up with a structure that allows OpenAI to attract capital and talent on better terms and be a stronger competitor, which will allow OpenAI to build AGI in a way that it wouldn't have been able to do otherwise, society could also benefit from that.

Kevin Frazier: Today we're talking about the twists and turns of OpenAI's corporate structure. Though many people would likely not put corporate governance near the top of their list when it comes to AI governance, this group makes clear that the legal incentives shaping OpenAI's operations, as well as those of other labs, will have a major impact on AI development.

[Main podcast]

So thinking about the headlines we've been reading for OpenAI and its changing governance structures throughout the years, starting way back as a nonprofit, then pivoting to this sort of hybrid structure, and now allegedly and reportedly considering a for-profit structure, we need to kind of get back to basics and understand what are the different pros and cons of various corporate governance structures.

So Gad, can you just start with an overview of the nonprofit structure? What's the traditional rationale for opting for that approach over some of the alternatives?

Gad Weiss: Right. So, thank you Kevin. Trying to avoid too much of a business association is one answer. We'll go over this very quickly. So when you want to start a new business and you can choose different kinds of business entities, one of the questions you have to ask yourself is how free you will be to promote a social purpose or a charitable purpose over the interests of other financially motivated stakeholders that you might have.

So one option that you have is to go for the complete pure nonprofit structure where you have an entity that might be funded by donations from third parties, but these donors cannot expect to make, should not expect that business to be run to promote their financial motives, right? A nonprofit should be run to promote a certain charitable purpose.

As we go down the scale, we can choose different kinds of business entities where that social or charitable purpose can be balanced with stakeholders’ financial motives, and we can think of the public benefit corporation, which is also highly relevant both in the context of OpenAI and other business entities. So these are entities in which managers, directors are required to balance stakeholders’ financial interests and a social purpose defined in that entity's charter.

And on the complete other end of the spectrum, you have the regular plain vanilla corporation where you have a whole discussion about what does that structure actually require in terms of how much weight should you give to maximizing shareholder value as opposed to broader stakeholder values, but it certainly does not give you the same kind of freedom to put the social or charitable purpose above all else as you could have in a nonprofit structure.

Alan Rozenshtein: So Page, we're gonna get to you about the OpenAI specific history here, but I do wanna ask one follow-up question of, of Gad first, which is, can you talk through the implications of changing from one corporate structure to another, right? Specifically when a nonprofit tries to go and become a for-profit, either a regular for-profit or a public benefit corporation, why is that such a big deal?

Because at least as I was reading the, the news reports and commentary there, there was this real concern, at least among some critics of this, this was a massive, they almost talked about it as a theft of assets, and I, I was really trying to understand why is it so profound and why are there such complicated rules? What is at stake when you go from being a nonprofit to being a for-profit, and, and why is that potentially maybe more controversial than going from one for-profit form to another for-profit form?

Page Hedley: So, so just to add some gloss on what Gad said, under the nonprofit structure, it's, it's not just that the company can pursue its charitable purpose above shareholders' interests, but it, it must. It's legally required to, and if it doesn't, it can be sued, it can be brought to court by the attorney general. That is a big deal. There's a clear primacy of the charitable purpose above all else. So that is the deal with the nonprofit.

If you switch to even a public benefit corporation, that is very different. So under the public benefit corporation, the company is permitted to consider a public benefit goal in addition to shareholder returns, but it is not required to, it's required only to do some vague balancing. There's literally never been a case of any public benefit corporation being held to account for that. In part, that's because the attorneys general can't sue. The only parties that can sue are shareholders, and how often are shareholders going to sue because the company has favored them too much relative to the public benefit mission?

So that by itself is a pretty big change for people who think the charitable purpose is important because what it means is to the extent the interests of shareholders and the public diverge here—and that is the premise of OpenAI’s founding, that these interests are going to be very different in the case of a transformative technology like artificial general intelligence—if they're different, then reallocating the organization's interest from the public to the shareholders either endangers the public because they're incurring risks they wouldn't have otherwise incurred, or they're literally reallocating expected wealth and power from the public to the shareholders. So if you believe in the importance of the charitable purpose and also just protecting what's owed to the public you should be pretty exercised by, by this plan.

Alan Rozenshtein: And so Gad, back, back to you then, sort of more generally then, how, how do these non-profit to for-profit conversions tend, tend to work? Sort of what, what has to happen? Because my understanding is that this does sometimes happen though.

Gad Weiss: So I think a main point to consider here is that when you have leadership of a nonprofit considering a switch to a for-profit structure, at the moment that they make this decision, they're still a nonprofit. The decision to switch, the reasoning behind it, should somehow serve a charitable purpose. And you have to take into consideration that if you did raise funds from donors, they did that based on a certain expectation and understanding that they are contributing to a nonprofit and not a for-profit business. So these are all things that the management of a nonprofit should bear in mind when they're considering this kind of restructuring, which in some instances could make sense. It could promote the charitable purpose.

Page Hedley: Yes. Just to piggyback on that, I think the example of organizations doing this, that is the relevant precedent here, is the hospital conversion cases, and without talking about the specifics of those, there, there are a lot of goals that would be more effectively pursued in a for-profit structure. If your goal was to make awesome smartphones at cheap prices, don't incorporate as a nonprofit. If you, if you did that by mistake and then you realize, oh my gosh, there's this incredible market for this, you should switch, and switching would be consistent with your charitable purpose.

So that, that is possible provided switching advances your charitable purpose. That is the key question. And so, the, the fair market value question that I think many people have been asking is, I think, the wrong question. It, it might be relevant in some cases, but ultimately the question is, does this change in structure advance or undermine the charitable purpose?

Kevin Frazier: And with that key purpose in mind, Page, in your letter, the Not for Private Gain letter signed by yourself and a number of former employees of OpenAI, a number of academics, a number of researchers, you all go into great detail about the original understanding and mission OpenAI had when it decided to choose this nonprofit structure.

So can you walk us through why, given the known costs of developing a massive AI model, given the amount of financial capital you need, and some of the traditional incentives that you would think about for creating a large language model or a large AI system, why opt for this nonprofit structure? And in particular, if you could dive into the weeds of what was that explicit charitable purpose that OpenAI initially adopted?

Page Hedley: Yeah, great. So at the time OpenAI was founded, the only company explicitly trying to build artificial general intelligence—which for those not familiar with it, you know, could, could be the most impactful technology humans ever create, this is certainly the view of OpenAI’s leadership—was Google.

And this was very concerning to OpenAI’s founders, including Sam Altman and Elon Musk and Greg Brockman, in part because they thought that the interests of a company and its shareholders was very much not the same as the interest of the public, and this was too important of a technology, too big of a deal for it to be under the ownership of a company like Google. So, they founded a nonprofit.

In 2015, it was very unclear how we would actually get to AGI, as we call it. This was before ChatGPT; this was before the first paper about GPTs being a paradigm that might end up yielding the kind of results it's been yielding for, for many years. And it was, it was unclear that it would be so expensive to build this technology.

So OpenAI knew it needed a lot of capital to be at the cutting edge, because being at the cutting edge is an important part of its, of its mission. It could not do what it wanted to do by being an academic think tank, by doing research. The donors who supported it could have written checks, were writing checks to academics. What OpenAI offered to do that no one else was doing was be in the game, be a player, have the cutting edge model so they could figure out how to make them safe so they could actually steer the policy and governance conversations to ensure this technology is developed in a way that that benefits the public.

A couple years later, it turns out that building this tech is even more expensive than they thought, and I think that wasn't obviously foreseeable. They, they had a lot of capital. They raised $130 million from, from donors. They could have kept raising money at, at that level. That was plausibly enough for them to be the cutting edge, but it turned out it wasn't. They, they needed extraordinary amounts of capital.

Kevin Frazier: Yeah. Just to pause there—Gad, if you could dive into how that issue around the incentives of a nonprofit structure also may impose a sort of cap on your ability to stay at the leading edge of AI development. In general, when we're thinking about nonprofits in a sort of abstract manner, if you could go really into the weeds of why that may not be the proper vehicle for staying on the edge of something as capital intensive as AI development.

Gad Weiss: So if you're going to develop AGI, there are two main kind of resources that you need. One of them is financial capital. It is very expensive to develop this kind of technology. The second is human capital. You need the best and brightest on your team. You need people like Page to help you build it.

And in order to try to attract capital and talent on the best possible terms, you need to offer the right incentives for them to join you. If you are going to go out and try to raise money from investors, you need to offer to them a business structure that makes sense, and when startup investors think about investing in startups, the only way for these investments to make sense is for them to have an ability to rake in disproportionate profits from the few good investments they make, because we know these sorts of investments fail at famously high rates.

And when you try to attract the best kind of employees to your startup, you need to always consider the alternative. These are people that can go work for big tech companies or for the other hot startups, so you need to be able to offer them the kind of compensation that would attract them to come and would incentivize them to do a good job and to stay for the longer term. A major aspect of that compensation package is your ability to give them equity compensation, and the kind of equity compensation that has a significant upside potential that can really allow them to participate in the startup's success.

All of this, these are structures that are either impossible or complex to create under a nonprofit structure.

Kevin Frazier: Great. So we, we know these key limitations of a nonprofit approach, especially with respect to the needs, the essentials of AI development.

So, Page, you mentioned that we started to see that despite substantial donations coming into OpenAI, we needed to maybe rethink how OpenAI was going to be structured, and this all came to a head in 2019.

So what were the, the key factors where suddenly we saw this pivot? Maybe a traditional nonprofit was reaching the end of the road as the vehicle for achieving our goals. What was the rationale for moving to a new structure and what did that structure look like?

Page Hedley: So it's a balance. I think there's a, a quote from, from Sam Altman—I don't recall the specifics—we have in our letter about, you know, the nonprofit gives them some of what they, what they need, the, the, like, the for-profit structure gives them some, some, some other parts of what they need, but if they go all the way in that direction, you know, they would end up creating the problem they tried to solve, so they, they wanted to, to strike this, this balance, this grand compromise. And this was the, the 2019 restructuring. And on paper, I think it's, it's pretty reasonable and, and happy to go over the, the, the basic components of it.

So OpenAI restructured from a normal nonprofit relying on donors—and donors would receive no returns on their donation—to a hybrid capped for-profit. And what that means is they had a, a for-profit vehicle, an LLC that offered investors a return with a caveat—I'll come back to that caveat in a sec. And the LLC had an unusual operating agreement. The operating agreement stated that its primary fiduciary duty, its primary objective, was the charitable mission, and not just the charitable mission, also the OpenAI charter, which is an operationalized version of the charitable mission, so it's not quite so vague, and there are some important provisions in there. And that everybody, investors and employees, you know, before they joined, before they invested, had to basically sign a document acknowledging that this was the constitution of OpenAI's commercial entity.

That wouldn't be enough because that wouldn't be legally enforceable by the attorneys general, so on top of that, the, the manager of the LLC, the, the entity with the authority to run day-to-day operations, complete exclusive authority is the nonprofit. So the nonprofit and the LLC have the same duty to put the mission first, and the attorneys general have jurisdiction over what the nonprofit does. And so if they fail to discharge their fiduciary duties, the AGs can step in.  So that is what allows the current structure to put the mission first, and for that to be legally enforceable, which I think is like the most important part of its current structure.

A couple things on top of that that are important. One, I mentioned that there's a caveat around how investments work. They're capped, and the reason they're capped is because OpenAI believes that AGI could generate extraordinary amounts of wealth, more, more wealth than any technology humans have ever created. And they don't need investors to be incentivized by that. Like pretty ordinary returns would be enough for them to raise the capital to, to be at the cutting edge.

So they just need to create enough incentive for people to invest in their operations without promising some extraordinary, unprecedented level of future wealth to these shareholders. So what they said is you'll get a, a cap on returns. For initial investors, it was a hundred X, which is extraordinary, right? That's, it's certainly high enough for people to feel pretty good about this investment.

Despite that, investors are, are now unhappy about that cap. And so some people are skeptical that a hundred X is a meaningful limit at all. Well, if the investors are unhappy about it, you know, that should be, you know, some, some reason to update your views about whether this is a meaningful limitation. So currently any profits above that cap belong to the nonprofit to benefit the public. Under the, the new plan, it would all just go to shareholders.

And the, the last one is currently you have partners like Microsoft. Part of the deal is they have rights to the intellectual property that OpenAI creates, but again, this is all part of this grand compromise just so they can get to the goal. And once they get to the goal, the goal, according to their founding documents, is to use this technology to benefit the world.

So they said, Microsoft, you, you don't get access to artificial general intelligence. You get access to the pre-AGI tech. But once we get there, no one has any right to it. It belongs to the nonprofit. That means they can give it to the U.S. government, the consortium of nonprofits, use it to, you know, so scientists can solve, I don't know, mortality. That would also be gone.

So all of these things—the legally enforceable primacy of the charitable purpose, the profit caps, and who actually owns, controls AGI—are in jeopardy under this restructuring.

Kevin Frazier: And what stands out to me, as you note in your Not for Private Gain letter, Page, is that again, the mission here that I think has gotten lost in all of the discourse around OpenAI's various pivots wasn't necessarily to be the first to AGI, but to assist humanity with the adoption of AGI. So it wasn't necessarily, we have to be the first and only, but okay, if someone else is first, let's make sure we can help out with this adoption by society writ large.

So Gad, I wanna come to you for this new hybrid structure to get an understanding of what may be some assumptions that OpenAI had about how that structure was going to work, and how it might not work in practice, right? So you have this idea of a new hybrid system that maintains the charitable purpose of the nonprofit, but there's a reason, or you tell me, there's usually an understanding of why we don't see this hybrid structure copy and pasted in, in most contexts. So what were some of those assumptions that you think OpenAI or some other lab may have held when they were adopting this unique approach?

Gad Weiss: Well, I, I think the question of what OpenAI had in mind while adopting this structure kind of depends on how cynical you are about this whole project. But I think that an important aspect to consider here is that we're discussing the, the capped profit structure and who controls OpenAI, what exactly the relationship is between the nonprofit level and the for-profit level, and what kind of interest investors hold or the founders hold. We need to consider that in a company like OpenAI, the governance structure has another important aspect that is not built out of board seats or formal control rights. There's a whole other layer of informal control.

So OpenAI is not an early stage startup, but it resembles an early stage startup in the sense that it is highly dependent on both investor capital and on one of its founders, in this case, Sam Altman, which is essentially inseparable from OpenAI, as OpenAI's board had to learn the hard way at the end of 2023, you can't really separate the two. Sam Altman is OpenAI in a sense, and OpenAI is Sam Altman.

Alan Rozenshtein: And, and, and just to explain, just for those who are, may not remember, this is when, in November 2023, the board tried to, I mean, it did in fact fire Sam Altman. The details around that seemed to be continuously extremely opaque as to why they did that. It seemed to be concerns about Altman's honesty, candor basically, and their, his willingness to tell them things. And then the board really backed off when there was, it seemed to be a total full scale revolt from OpenAI's employees and its outside partners.

And I'm just curious sort of what that tells you, if anything, about whether this structure worked as it was intended to, that whole saga.

Gad Weiss: I, I think what the, that this saga tells you is that there's a reality you have to acknowledge and the reality is that OpenAI's investors and Sam Altman will hold a certain extent of control over OpenAI, regardless of how you allocate the board seats or how you structure its governing documents. That's just a reality and you have to face it. If you want to build AGI—and the first part of OpenAI's mission is to build AGI, right, it's building AGI that will benefit society, but the first part is building it. And if you are going to do that, you have to acknowledge certain realities.

Now does that indicate that OpenAI's structure is not working or working exactly as planned? That depends on how cynical you are about this project, which, which is what I said. So one way to think about that is that we try to have the nonprofit board have complete control over OpenAI, but obviously that doesn't happen, and another way to think about that is we didn't really want that to happen at all. We acknowledged that, that there are informal, informal control structures, and by placing all formal control with the nonprofit, we tried to create some sort of balance.

Page Hedley: Just to jump in, just going back to what Kevin said about the, the mission, and to put a gloss on what Gad said, OpenAI’s mission is not to build AGI, very clearly. Its mission is to ensure it is built safely and for the benefit of humanity. It states in its charter that one way we'll try to do that is to build it 'cause who's better placed to ensure that goes well than the company building it, but it would consider its mission accomplished if someone else builds it. And there are specific risks to racing to build this tech that can make the situation more dangerous, and that's why the charter actually commits under some of those circumstances to stop competing and basically fold up shop and assist another organization in order to achieve that same objective. I think this is a very important distinction that people have not sufficiently focused on.

Kevin Frazier: And Page sticking with you for a second there, we saw this structure set up, it was unique, it was a hybrid structure, not replicated by many entities; you have a bunch of board members who are in this confusing world of at once managing a nonprofit's mission while also trying to maximize profits of this LLC that you're kind of stewarding, and so weird incentives going on there.

And at the same time, we've seen the level of competition domestically increase in terms of who are the leading AI labs. What's their user base? What products are they offering? And then of course, we have things like the DeepSeek moment where we realized perhaps the moat people thought existed between China and the U.S. was way smaller than previously anticipated.

So all this pressure is heating up around AGI and being the first to reach AGI, and then we started to get some rumblings of, hey, maybe this hybrid structure has now outlived its longevity, just like the traditional nonprofit structure outlived its longevity.

So can you walk us through that evolution that OpenAI had, realizing, huh, maybe this structure isn't working, let's consider the for-profit structure, and what alarm bells started to go off as a result of that consideration?

Page Hedley: Yeah. So first, of course, you know, I was there when they were converting from the nonprofit to the, the current hybrid structure. I have not been there as they've discussed converting to the, you know, a complete for-profit, so I can only speculate.

But I'll say that, you know, when we were discussing what the world would look like, what OpenAI’s incentives would be if it were successful building AGI, this is what we envisioned. These are the temptations that we were trying to plan around, the governance safeguards we discussed, you know, the mission comes first, legally enforceable, etc. That is OpenAI's effort in 2019 to tie itself to the mast. It, it was predictable that when it got close enough to the big prize, the amount of money, the amount of power, the influence from investors, employees, even who have equity interest would be extraordinary. And so absent, like very strong unremovable protections, this is what we would expect.

So I'm, I'm not surprised by this. I am surprised that people are not more exercised, given the fact that this is the whole point of those safeguards. This is the moment they were put in place to guard against the, the risks that they're put in place to guard against.

Alan Rozenshtein: So what, what is it then that you wanted, right, you and your colleagues that signed this letter? Like what were you, what were you hoping for? Just to raise sort of public outrage to, to guilt OpenAI into acting in a different way, to get state attorneys general involved—like what was the specific ask here and, and do you feel like you've succeeded?

Page Hedley: We, we've not succeeded. The audience first and foremost of our letter were the attorneys general of California and Delaware—those are the attorneys general that have jurisdiction here—to hopefully help them see the issues as, as we saw them. And I think we, you know, collectively have perspectives on both the history of OpenAI and the risk from technology and the, the corporate governance and law issues that we thought we could combine to provide an accessible framing of, of the issues.

In part, we're motivated to do this because we've seen so many people talk about fair market value. The, the framing in the media had been, the nonprofit has some economic interest in this for-profit entity, the relationship between the nonprofit and for-profit will be severed, well, that nonprofit must be paid well. What is the fair market value of their economic interest?

And I think that is the wrong question because even if they got the fair market value, that would not mean they were advancing their mission by selling their economic interest. And, you know, you can think of lots of silly analogies; if you are a nonprofit with a mission of protecting a specific patch of rainforest, and you sell that patch of rainforest to a lumber company for fair market value, you're not advancing your mission even if the, the sale price was fair.

Alan Rozenshtein: Let me, so let me just push you on that for a second. Because it occurs to me that if the goal of the nonprofit is to make sure that, whoever builds AGI, it will be built, or at least implemented and adopted into society, in a beneficial way—well, if you have a nonprofit, right, that is running a for-profit and that for-profit wants to go and do something, right, and the nonprofit is gonna get a trillion dollars in return—I, I'm making up numbers here. But let's say the nonprofit actually gets the proper fair market value, right? And if you're upset about this whole thing, it's because you think that OpenAI might actually succeed, presumably, and so therefore, the fair market value in such a situation would be extremely high.

Well, why can't the nonprofit then say, look, now I have a trillion dollars. Now I'm a pure nonprofit; I'm finally properly, you know, capitalized, and I'm gonna go, and now without the, the distraction, frankly, of having to run a for-profit research lab/social media chatbot company, or, you know, whatever OpenAI wants to become. I'm now going to do this, do God's work that I was, I was meant to do. Why, why, why, why, what is wrong with that, what I just said?

Page Hedley: I think there are two responses to that.

First is imagine you're a donor considering donating to OpenAI in 2015, and you have this vision, this goal. What you wanna do is invest in frontier AI companies, OpenAI, Google, and then all the returns you get from those investments will, will go to a foundation to be used in the manner you're describing. If the goal were to get a slice of the economic upside to be used for charitable purposes, there are far more direct ways of doing that. That was not the goal of the donors here.

Relatedly on the, on the first thread, one of the main reasons for this structure, for having these guardrails on OpenAI to ensure that it's not just, you know, racing to, to develop this tech for competitive reasons, but taking the, the public's interest into account, is because people believe that companies can proceed dangerously and irresponsibly in ways that endanger the public.

There will be lots of trade-offs when they're deciding how thoroughly to test a model for safety, you know, whether people can create bioweapons when that model is released vs. getting it out quickly before a product launch to raise more money. And those are exactly the contexts in which you want to have a board with a legal duty to say, no, I understand that the investors want you to, to release this next week, but you need to wait two more months so we can finish the testing. So that's one reason why the, simply investing in the companies wouldn't be enough.

Another more cynical way of thinking about this is if we're just thinking about it in terms of expected returns for the nonprofit, why would the investors want this change? So right now, the, the investors, you know, they have their investment with, with profit caps; they want to change the structure, remove the profit caps; the, the nonprofit gets some amount of shares—the, the exact amount is still TBD, we don't have that amount. If the expected wealth to the nonprofit was higher after the restructuring, why would the investors want this? I think a simple way to think about this is the, the investor's interest and the public's interest are creating some sort of zero sum game here. If, if the investors get something, it comes at the expense of the public, whether that's wealth or risk.

Kevin Frazier: Yeah, and what stands out to me too, building off of that conversation, is the fact that it's often neglected that corporations, nonprofits—these are state created entities. In the old days, way back when, you actually had to get an act of a state legislature authorizing you, for example, to assume the corporate form or to, to have this authorized nonprofit status. And so, these are in many ways a contract. You are a state created entity.

And so, as you all point out in your letter and argue forcefully, by taking this nonprofit structure and proposing to go the for-profit route, it's essentially an ultra vires act—a beyond-the-powers, outside-of-that-purpose act—where you agreed with the state, we are going to exist for this reason, right?

So way back when we had sewage companies, for example, that said we exist to build pipes, and then all of a sudden they wanted to pivot into horse poop pick up, well, hey, actually that wasn't within your explicit purpose, so we're, we're gonna yank your charter, you no longer get to exist.

So I think it's important to really see that this is a specific relationship that OpenAI has with Delaware, that it has with California, which is why the AGs could be the actors that say, no, I'm sorry, you don't get to go that route, this is what we agreed to contractually to allow you to operate.

So Gad, we've talked a lot about for-profit structures and nonprofit structures, and you talked briefly about public benefit corporations. Can you tell us more about how those exist in California and/or Delaware? What's the traditional structure look like and what are some of the, the rationales and pros and cons of that structure?

Gad Weiss: If I can just take a minute before that to respond to some things Page said. So first I don't think we should really view investors' interest and public interest here as a zero sum game—not in terms of wealth and not in terms of risk. I think that from an investor perspective, in many cases, being mindful about AI safety as you build your products will be the right business decision.

And on the other end, I, I also think that if investors come up with a structure that allows OpenAI to attract capital and talent on better terms and be a stronger competitor, which will allow OpenAI to build AGI in a way that it wouldn't be able to do otherwise, society could also benefit from that, and, and so this is not necessarily a zero sum game. And, and also, I think that investors have other legitimate reasons to want to effect these kinds of changes in OpenAI's, both economic structure and governance structure, that have nothing to do with mission drift, right?

Page Hedley: Yeah, I, I appreciate the question and that's a good, good opportunity for me to clarify what I meant.

Right now, in the current structure, OpenAI is required to put its mission first. That does not mean it can't consider investors' interest. That does not mean it cannot go out and try to raise a bunch of money. That does not mean it cannot make a bunch of products. What it cannot do—because, because there are lots of overlaps in, in what that achieves and, and what the mission is designed to achieve—what they cannot currently do is advance investors' goals when they are very much not in the public's interest. So there's a delta there. All that's at stake. If, if their interests were entirely aligned, we're back to the smartphone case, you know, you, you made a mistake by incorporating as a nonprofit, just trying to make money, that's the best way to benefit the public.

So the premise of OpenAI is the public's interest and the shareholders’ are not the same, and shareholders are saying you're doing some compromising, whether in practice or an expectation that we don't like, so we, we wanna shift the balance. So in that context, they are, it is zero sum. The delta is what's at issue here. There's a ton of overlap. That's fine. That's unaffected.

Kevin Frazier: Let's pivot to the current proposal. So Page, can you walk us through, we had these early considerations of perhaps OpenAI is going to go full for-profit, they're gonna send it for-profit. Instead, they've kind of walked that back theoretically in, in some ways to this public benefit corporation approach. How do you, as a signatory to this Not for Private Gain letter, feel about this scaled back proposal? In some ways, is it adequate? Is it problematic? What role do you want AGs to play now?

Page Hedley: Yeah, so first I wanna question the premise of the question that this is a scale back at all. So what it was going to do before is convert the LLC into a PBC. That is still the plan. Before, the nonprofit—which is currently the manager of the LLC and has an economic interest in the LLC—was going to sell that for some unknown price in return for shares in the PBC. That is still the plan.

The only update is OpenAI has said that the nonprofit will have some form of controlling shareholder interest. Most likely that's special voting rights. This is a plan that was already under consideration. There was a Financial Times article from February that speculated that the, that OpenAI might do this for self-interested reasons because it would help Sam and crew fend off hostile takeover bids from Elon Musk and from Microsoft. So my, you know, cynical, best guess is they, they took this thing that they're already considering and they're repackaging it as a major change, as a, as a big concession.

But on the merits, the question is what, what is the import of the nonprofit having voting control, shareholder voting control, of the PBC? And, and the answer is not much by default. There, there might be all kinds of non-standard contractual obligations that they could implement—they haven't shared the details—but by default it, it wouldn't change any of the things that I mentioned being concerned about.

By default, the PBC would still have different duties than the nonprofit. The PBC would have this, this balancing obligation that's not enforceable, that is not affected by whether the nonprofit has, is a controlling shareholder. Their duties are not affected by that. The amount of control of the controlling shareholder is, is very different than the control they had as the manager of the LLC.

It is, it is rumored that they might not have the authority to fire directors, which is the, the only relevant tool, as far as I understand, that they might have as a controlling shareholder, and the attorneys general's authority to intervene only extends as far as the authority of the nonprofit board.

So if you have a fairly powerless nonprofit board that can appoint directors but cannot fire people, which is the most likely outcome, there's little they could do to actually stop the PBC from completely ignoring the public's interest. And there's virtually nothing the attorneys general could do if the mission is completely jettisoned by the company.

Alan Rozenshtein: So in, in your view, is it the case that the world is not so different today than it was two weeks ago? That like the battle is still very much ongoing, that this is largely a PR move to get people like you—or maybe not people like you, but the broader media to the extent that it has cared about this—to say, okay, I guess this has been resolved, we can move on to something else. But from your perspective, like the battle that you are fighting is still very much being, being waged, and none, none of this has changed that fundamentally.

Page Hedley: Yeah, I don't think it changes things fundamentally. You know, I wanna be careful about speculating about OpenAI's motives. I think that the headlines have wildly mischaracterized the importance of, of this update. I think the, the fundamental question that people should be asking OpenAI is, you know, right now in the LLC's operating agreement, they state contractually that the charitable mission comes first and the nonprofit has extensive day-to-day complete control as a manager. They could do that by, by contract in the PBC's certificate of incorporation; they have not said they would do that, so until they commit to doing that, I assume they're not going to do that and people should be asking them about that.

Kevin Frazier: And Gad, from your perspective, is this a meaningful change going from the for-profit to a PBC or do you agree with the general assessment that perhaps this isn't as, as substantive as some have been led to believe?

Gad Weiss: I think that the most important thing that we have to keep in mind when we are thinking about a privately held PBC, and going back to some of the things that Page said before, is that defaults don't matter much. Privately held startups are built to a significant extent on all kinds of private ordering solutions.

So even though shareholders in a PBC by default may not have much means to control the way its directors make decisions on AI safety issues or on others, in fact, the way that OpenAI might look after this incorporation might be very different. And we have today recent changes to Delaware corporate law that have even made it easier for shareholders to take on more and more responsibilities that ordinarily would be placed with the board, and to hold more control directly over how a company is managed.

This is one thing we have to keep in mind, but I think that maybe even more important than that, we always have to ask ourselves not only if this new structure allows for a mission drift that the previous structure did not.

If we're really trying to understand what the risks are here, we also have to ask ourselves two other important questions. First, how likely it is that investors do really have that kind of mission drift in mind—which I think there are good reasons to believe that they don't—and second, how much of this structure really reflects a change from what we currently have in OpenAI today? Also, again, taking into account not only the allocation of formal controls in OpenAI, but also the informal controls that are incredibly powerful in a company like OpenAI, which is, again, highly dependent on both investor capital and the active involvement of its founder.

Page Hedley: I mean, so regarding the investor mission drift point, the, the charitable mission would, would not be the investor's mission. It just wouldn't be, they, they would be investors in an LLC with—sorry, PBC—with, with different goals. So it would not be a question of whether they will stay true to the mission. They just have no responsibility to the mission, absent very nonstandard things that they could do in, in the incorporation.

Alan Rozenshtein: Alright. So I, I think one important takeaway from this conversation is that this question over OpenAI's corporate form is still very much an ongoing live question, and we'll have to see, for the reasons Page outlined and, and the reasons you, Gad, just outlined about informal organization, we'll just have to see how this plays out and we will sort of revisit this I'm sure going forward.

The other question that I have had—and I wanna finish our discussion with this because it's really been lingering as I've been thinking about this over the last couple years actually—is why does this matter? And I wanna pose the question in the following way. If OpenAI was the main player in this space, then I could understand the real focus on corporate control. And I generally understand it. I mean, even if OpenAI wasn't important, you know, if you ask people to donate assets, like the rule of law matters, right?

But the stakes really were high, I think, in part because there was this perception for a long time that, you know, OpenAI was gonna get to AGI first, or, you know, even if that wasn't necessarily OpenAI's mission, they were gonna be one of the leaders here, and obviously they still are.

But you know, as Kevin mentioned earlier, just in the United States, there seems to be enormous competition, and Google in particular—which I think is notable because it was my sense that Google was, in part, the reason why OpenAI was formed—has, I think, really just in the last six months hugely accelerated its progress. I, I think it's fair to say, at least my sense of the industry is that Google is probably fundamentally in the best position right now, in part 'cause of its deep expertise, its, you know, massive compute advantage; it's one of the few companies that's not as reliant on Nvidia because of its own sort of stockpile of, of, you know, TPUs and its own custom, custom silicon. It, it probably has the best data just given, you know, what access Google has, right?

And then of course you have Meta, you have X, you have a bunch of other companies, you have the Chinese companies. It just seems very plausible to me that OpenAI is not gonna get to AGI first and that therefore, this just matters a lot less, right, in a world where other people get to AGI.

So, so, why am I wrong, in other words? Right? Why, why, why should we still care about this issue beyond the sort of pure kind of corporate governance rule of law question, which totally, totally accept is important, but that's not the reason the New York Times cares about this, and so that, that's my, that's what I wanna finish with.

Page Hedley: Yeah. Great. Yeah, I think there are three things. The first one is the one you mentioned that you don't want to get into. Arguably it’s a bad precedent; you know, Sam Altman has made commitments to the public in Congress for, for a decade, that should be worth something. Bracketing that.

Second, to the point you were saying about there are a lot of competitors, OpenAI isn't necessarily the front runner, who the front runner is changes. By all accounts, OpenAI has been the, or arguably the, front runner for, for a while. So they're not just one of many companies, they have a very likely chance of being the first company to develop a technology that is extraordinarily powerful, whether we call that AGI or not. So I think, you know, they are in a, in a select group of companies that we should be paying particular attention to.

But I think the most important reason here is OpenAI’s theory of change—or at least one of its theories of change—was to be a role model organization, was to be in the thick of things, to be a competitor, to be at the cutting edge and take the lead by example on what responsible AGI development looks like.

And we see lots of examples right now because we have no regulation in this space of companies putting out a document, explaining how they'll go about testing their models before they're released, or making transparency commitments or making commitments to give government agencies access to a model before they're released. And what that does is it sets an example. It creates incentives for other companies to follow suit and criticism if they don't. That creates this, you know, race to the top. And OpenAI was supposed to be taking the point, taking the lead on being a role model. It, it is, it is not assuming that role in the way it was designed to assume, and I think even if it's one of many competitors, there would be great value in it actually, assuming that role.

Kevin Frazier: And Gad, any final thoughts you wanna share before we end here?

Gad Weiss: And so I just wanted to say that I completely agree with Page. I think that this is just not how innovation usually works. It is important that OpenAI stays an important player in this competition to reach AGI.

Think, for example, about the market for GUI, right? Xerox was there first, Apple was the first one to package it into a personal computer that many people wanted, and Microsoft, at some point, had the advantage of putting on the market the product that was more friendly for developers and had better compatibility.

So I think it's important for us as a society that OpenAI stays in this market and is still competitive and offers a product or technology that offers a different balance between capabilities and AI safety. We can all benefit from that, even if it's not the first one to reach AGI.

Kevin Frazier: Well, the only thing I feel comfortable predicting at this point is that there will be new news in the near future, so let's leave it there and look forward to having you all back soon. Page and Gad, thanks so much for joining.

Page Hedley: Thank you so much for having me.

Gad Weiss: Thank you.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look for our other podcasts, including Rational Security, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. Check out our written work at lawfaremedia.org.

The podcast is edited by Jen Patja. Our theme song is from Alibi Music. As always, thank you for listening.


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Senior Editor at Lawfare.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland. He also speaks and consults on technology policy matters.
Page Hedley is a senior advisor at the Forecasting Research Institute and co-author of the Not for Private Gain letter.
Gad Weiss is the Wagner Fellow in Law & Business at NYU Law.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
