
Why OpenAI’s Corporate Structure Matters to AI Development

Kevin Frazier
Thursday, May 15, 2025, 12:23 PM

OpenAI's potential corporate shift from its “capped-profit” model may conflict with its AGI-for-humanity mission.

OpenAI logo with magnifying glass (Jernej Furman, https://commons.wikimedia.org/wiki/File:OpenAI_logo_with_magnifying_glass_%2852916339167%29.jpg; CC BY 2.0 DEED, https://creativecommons.org/licenses/by/2.0/deed.en)

Published by The Lawfare Institute in Cooperation With Brookings

A group of AI experts and legal scholars has once again raised concerns with two state attorneys general that OpenAI’s latest proposal to reform its corporate structure runs afoul of the lab’s mission. That mission has been described in manifold ways over the years but boils down to a core idea: “help the world build safe AI technology and ensure that AI’s benefits are as widely and evenly distributed as possible.” In their recently published letter, the authors warn that OpenAI’s desired corporate structure would suffer from a critical flaw: Key decision-makers would no longer have a “primary fiduciary duty to advance OpenAI’s charitable mission above all else.” This letter, which follows an earlier letter from an even larger set of stakeholders raising similar concerns, and the surrounding debate over OpenAI’s future raise important questions about the adequacy of the nation’s current approach to corporate governance in an age of incredible progress in artificial intelligence (AI).

Typically confined to the backrooms of business deals, corporate law is now taking center stage in directing U.S. AI development. Look no further than OpenAI's journey: Launched as a nonprofit, it then engineered a complex hybrid with a nonprofit overseeing a capped-profit venture (its current structure), and now targets a public benefit corporation model, albeit with nonprofit supervision. If this corporate chess game seems bewildering, you're not wrong—the intricacies are legally and technologically significant.

Given the stakes of OpenAI’s pursuit of artificial general intelligence (AGI), confusion and complexities associated with corporate governance cannot stand in the way of thorough analysis of the lab’s seemingly fluid plans. Whether Sam Altman and OpenAI can proceed with the proposed restructuring depends on how the attorneys general of California and Delaware interpret the lab’s articles of incorporation. More specifically, those officials must determine whether Altman’s latest proposal fits within the lab’s long-standing and legally binding mission to prioritize the safe development of AGI. The fact that such a consequential decision rests with two attorneys general exposes a key flaw in applying yesterday’s governance structures to today’s AI. This episode may result in an overdue adjustment to corporate oversight, especially with respect to AI labs working on transformative technology.

A Brief History of OpenAI’s Corporate Structure 

To understand the significance of this corporate maneuvering and the barriers to the lab operating under its ideal structure, it’s important to go back to 2015, when Altman, Elon Musk, and several others co-founded OpenAI as a nonprofit. As a nonprofit, OpenAI committed to operating toward a specific purpose in order to receive the traditional benefits of that structure, such as tax exemption. As a legal matter, OpenAI effectively entered into a contract with California and Delaware to adhere to a specific mission: “to ensure that artificial general intelligence benefits all of humanity.” Such a broad mission presumably could include any number of activities, investments, and structures. OpenAI President Greg Brockman clarified on a podcast in 2019 that their “goal isn’t to be the ones to build it, our goal is to make sure it goes well for the world.” 

OpenAI posted a similar statement on its corporate blog in May 2023 reaffirming that it need not be the first to develop AGI in order to fulfill its mission and emphasizing the importance of ensuring that AGI benefits the public. These statements have legal significance: Case law in both California and Delaware permits consideration of extrinsic evidence in disputes over an entity’s adherence to its articles of incorporation.

By 2019, the lab determined that pursuit of its mission required a different corporate structure. OpenAI’s leaders reasoned that nonprofits cannot raise the sort of revenue necessary to develop frontier AI models—reliance on donations (even millions of dollars of donations) is a poor strategy for such a capital-intensive endeavor. So, the lab established OpenAI LP, a for-profit, “capped-profit” subsidiary. This pivot did not involve a turn away from the mission. All employees and investors in OpenAI LP encountered this language in its operating agreement:

The Company exists to advance OpenAI, Inc.’s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company’s duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit.

This unique (but not unheard-of) structure was engineered to attract investment while aiming to preserve the nonprofit’s mission-centric approach. The nonprofit entity, OpenAI, Inc., maintained ultimate control over OpenAI LP. Unlike in a conventional for-profit entity, investors’ returns were limited to a predetermined multiple of their initial investment, with any surplus profits intended to revert to the nonprofit parent to further its mission. This design seemingly reflected the lab’s desire to attract investors aligned with the long-term vision rather than those focused exclusively on maximizing financial returns, thereby keeping the organization oriented toward its public-interest goals. The nonprofit board consequently held a critical and, at times, controversial oversight function (such as during a turbulent period in which Altman was dismissed and then reinstated in a matter of days), tasked with ensuring the for-profit arm’s activities were consistent with the overarching objective of safe and beneficial AGI.
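To make the capped-profit arithmetic concrete, the short Python sketch below is purely illustrative: returns up to a fixed multiple of the original investment go to the investor, and anything above that cap flows to the nonprofit parent. The 100x multiple is the figure OpenAI publicly cited for its earliest investors; the dollar amounts and the split_proceeds helper are hypothetical, not drawn from OpenAI's actual agreements.

    # Illustrative sketch of a "capped-profit" payout (hypothetical numbers).
    def split_proceeds(investment: float, gross_return: float, cap_multiple: float = 100.0):
        """Split an investor's gross return into a capped payout and a nonprofit remainder."""
        cap = investment * cap_multiple              # maximum the investor may receive
        to_investor = min(gross_return, cap)         # investor is paid only up to the cap
        to_nonprofit = max(gross_return - cap, 0.0)  # any surplus reverts to the nonprofit
        return to_investor, to_nonprofit

    # Hypothetical example: a $10 million investment that eventually returns $1.5 billion gross.
    investor_share, nonprofit_share = split_proceeds(10_000_000, 1_500_000_000)
    print(f"Investor receives ${investor_share:,.0f}; nonprofit receives ${nonprofit_share:,.0f}")
    # Investor receives $1,000,000,000; nonprofit receives $500,000,000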

Now, Altman has again concluded that the lab’s current structure falls short of its mission. He blames a lack of foresight. In a letter to the company published on May 5, Altman admits that back in 2015, “We did not have a detailed sense for how we were going to accomplish our mission.” He adds, “We did not really know how AGI was going to get built, or used.” Accordingly, late last year, Altman explored removing the nonprofit’s oversight role and operating solely as a for-profit entity. Recall, however, that OpenAI effectively made a deal with California and Delaware to put its mission of developing AGI in the public interest above earning a profit. Many other researchers, academics, and AI experts penned a letter, “Not for Private Gain,” out of concern that Altman’s for-profit plan clashed with that mission. They urged the attorneys general of California and Delaware to demand from Altman “answers to fundamental questions,” such as how the change would align with the nonprofit’s original mission and “protect [its] purpose by ensuring the nonprofit retains control.”

On May 5, the lab announced its decision “for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.” In short, the nonprofit would no longer be confined to a charitable portfolio; rather, it would retain its supervisory function over a public benefit corporation (PBC). The PBC model would differ from OpenAI LP in important ways. Under Delaware law, a PBC “shall be managed in a manner that balances the stockholders’ pecuniary interests, the best interests of those materially affected by the corporation’s conduct, and the public benefit or public benefits identified in its certificate of incorporation.” In other words, the PBC could consider a broad set of interests in its operations, including but not limited to those of investors who seek as large a return on their investment as possible.

How exactly the nonprofit would manage the PBC remains unclear. The Center for AI Safety observed that OpenAI’s announcement did not specify whether the nonprofit would hold a majority of shares or simply be a large shareholder. That said, an OpenAI spokesperson stated that the nonprofit would have the authority to appoint and remove the PBC’s directors. Absent such authority, it may be the case that the individuals tasked with enforcing the lab’s original mission have a limited ability to do so. OpenAI asserts that the current nonprofit board will continue “as the overall governing body for all OpenAI activities.” 

This has not placated the likes of Page Hedley, a former OpenAI employee and one of the signers of the Not for Private Gain letter. As Hedley conveyed to me over email, OpenAI’s announcement has “little impact” on the concerns he and others expressed in their initial letter. He explained, “OpenAI has a legally enforceable obligation to put the public’s interest over profits. That would not continue to be the case under the default implementation of OpenAI’s proposal.” Hedley and a smaller group of co-authors detailed their continued concerns in an updated letter sent to the California and Delaware attorneys general on May 12 (and made publicly available today). They identified several governance safeguards absent from the proposed structure. For example, the current proposal would not retain the limitation on investor profits. Hedley and his co-signers also pointed out that it is unclear whether the proposed entity’s leadership would have a legal responsibility to enforce the lab’s original mission, subject to ongoing oversight for fidelity to that mission by state attorneys general. They note that the proposed entity may not necessarily have to put mission over profit—a significant pivot away from an organization that once stood out from its profit-seeking competitors as “a nonprofit AI scientific research organization.”

OpenAI responded to a request for comment on this piece by pointing me back to its May 5 post on its company blog. Though that post does not definitively resolve the issues raised by Hedley and others in their most recent letter, it does suggest that the company is still in the process of determining the exact contours of its proposed restructuring. Per the post, OpenAI “looks forward to advancing the details of this plan in continued conversation with the attorneys general, Microsoft, and our newly appointed nonprofit commissioners.”

Ramifications

Whether OpenAI realizes AGI first or plays the role of helping society adjust to its ramifications, this episode shows that existing legal structures are antiquated and ill suited to the corporate governance challenges presented by rapid AI progress. As I wrote in a law review article, the evolution of corporate law has left state attorneys general without the resources and political will to hold companies accountable. The very short story is that the U.S. approach to corporate law traces back to a time when most corporations were small, local, and dedicated to a very explicit purpose. That is no longer the case, as this episode makes clear. As Adam Thierer and I pointed out recently in Lawfare, states likely lack the capacity to take on some of these major AI governance issues. Yet outdated frameworks now place the fate of OpenAI’s corporate structure in the hands of two state attorneys general.

The final structure adopted by OpenAI may have a significant impact on the timeline for AGI development. Changes to the proposed entity, for instance, may alter investor interest, the desire of employees to continue working with the lab, and the lab’s overall capacity to invest in more compute, data, and talent. The resolution of OpenAI’s structural conundrum will serve as a critical test case for the adaptability of existing legal and governance frameworks to the unique challenges posed by AGI development. Whether corporate law, historically designed for different scales and purposes, can effectively ensure that entities wielding the power to create AGI remain truly accountable to their foundational missions and the broader public interest is a question with implications reaching far beyond a single lab. 

My current answer to that question, as I have argued elsewhere, is that it’s time for an overhaul of corporate governance law, especially with respect to massive corporations with broad societal ramifications. Two key factors justify significant reform. First, reliance on state authorities, such as attorneys general, risks creating a patchwork of conflicting laws with shifting odds of being enforced. This hodgepodge of regulations hinders innovation by leaving labs and investors guessing as to whether and how the law will be enforced. Second, and relatedly, the resulting gray areas leave space for OpenAI and other large labs to gamble on strained interpretations of the law to further their own interests, perhaps at the expense of smaller competitors. A clearer, uniform regulatory approach could at once vest enforcement authority in a better-resourced public actor and ensure a more equal, predictable legal playing field for all actors. Part of that overhaul could involve the creation of a federal charter for frontier labs, providing an alternative to state-level incorporation. Under this federal approach, a single authority such as Congress would review and approve the corporate structure of labs, rather than numerous state attorneys general. The public would have more meaningful opportunities to monitor and evaluate lab activity, and Congress could allocate as many resources as it sees fit to ensure labs adhere to their stated missions and operate within their approved structures.

The OpenAI saga underscores a critical vulnerability: Our governance frameworks are dangerously outpaced by AI’s relentless advance. Moving beyond patchwork state oversight to a federal charter system for labs developing AGI is no longer a theoretical ideal, but an urgent necessity.


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and a Contributing Editor at Lawfare.