AI Companies Can’t Regulate Themselves. They Should Regulate Each Other.
Adapting a long-standing institutional model from financial regulation would let the industry write binding safety rules under government oversight.
Competition is undermining artificial intelligence (AI) safety. Anthropic recently abandoned its industry-leading safety guarantee for new model releases, stating that “[w]e didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.” A company that invests more in safety deploys models later, loses customers, and risks losing the investors it needs to fund compute for the next generation. OpenAI faced the same problem and responded by cutting pre-deployment safety testing time. Effective AI regulation must address the collective action problem at the heart of AI risk.
Despite calls to regulate AI from disparate quarters, there is no consensus on how it should be done. Eighty percent of U.S. adults believe the government should regulate AI safety, even at a cost of slower progress. The CEOs of OpenAI and Anthropic have both called for regulation. But the institutional design of a regulatory body has been neglected. There have been a few concrete proposals for creating a U.S. AI regulator in statute, but the broader commentary on AI governance has given the question too little thought.
The solution to this problem does not require inventing a new institutional form. For nearly a century, financial regulators have deployed and refined a model called federally supervised self-regulatory organizations, or SROs. Through SROs such as FINRA (the Financial Industry Regulatory Authority), industries govern themselves through binding rules subject to government approval and modification. The infrastructure for this already partially exists. Every major frontier lab except Elon Musk’s xAI belongs to the Frontier Model Forum, a body that, among other activities, coordinates risk management among members. What it lacks is statutory authority, mandatory membership, and government oversight—the features that distinguish an SRO from voluntary self-regulation.
Four Challenges Any Regulatory Framework Must Address
Any regulatory institution must contend with four problems. First, competition among labs is producing a race to the bottom on safety. The recent spate of high-profile resignations from frontier labs over safety concerns suggests that competitive pressure continues to overwhelm even new guardrails. This is a collective action problem; individual labs will keep sacrificing safety at the margin until they have a mechanism to coordinate without running afoul of antitrust law.
Second, a potential regulator faces information asymmetry. Details on training data, reinforcement learning techniques, safety evaluations, and capability assessments are essential for well-informed risk evaluation and mitigation but are either proprietary or tacit knowledge held by insiders. Even where outside experts could interpret such data given full access, governmental inflexibility in hiring, salaries, and management makes it difficult to attract and retain them.
Third, AI evolves far faster than law does. This is known as the pacing problem: A framework calibrated for GPT-3 would need substantial revision for GPT-4, and the gap will only widen. A durable regulatory institution must be able to update its requirements without waiting on Congress or the slow, unpredictable accretion of judicial precedent. The pacing problem compounds the information asymmetry, because the regulator must track the latest developments in frontier AI and adapt its rules in real time to keep pace.
Fourth, if the AI companies’ own warnings about AI risk are to be believed, regulation must account for harms that are irreversible and largely uncompensable. That demands ex ante intervention, such as pre-deployment evaluations and circuit-breaker mechanisms, before dangerous capabilities reach the public. Approaches that rely on after-the-fact liability or voluntary action inadequately mitigate catastrophic risk. By the time a court can assess what went wrong, the harm may be irreversible. Most advocates of such approaches recognize that deficit and formulate remedies that look increasingly like an SRO.
How Finance SROs Work
The Securities and Exchange Commission (SEC) has governed financial markets through SROs since 1934. Both FINRA and stock exchanges such as the New York Stock Exchange and Nasdaq operate as SROs. Public companies can choose which stock exchange to list on, selecting among different sets of rules. But broker-dealers have no such choice: FINRA is the only registration option, and every brokerage firm and registered representative must be a member to trade securities. Consequently, FINRA’s rules govern at least one side of virtually every securities transaction in the country, including every corporate bond trade. Its $1.3 billion budget, which comes from industry fees, is almost as large as the SEC’s.
The SEC maintains oversight through structural and governance requirements. For instance, it sets rules for board membership, such as the current requirement that a majority of directors be independent from the industry. Within SEC-set constraints and subject to its approval, SROs write the rules that actually govern member conduct. Their scope is broad: FINRA rules cover ethical conduct, customer protection protocols, employee supervision, compliance, recordkeeping, fraud, anti-money laundering, systemic risk, and more.
The rulemaking process illustrates how the model balances industry expertise with public accountability. Minor or noncontroversial rule changes take effect immediately upon publication, with a 60-day window for the public to comment and for the SEC to reverse the change. A wholesale revision to a rule’s text goes through a preapproval public comment cycle and often takes a few months. These two tracks far outpace both legislation and litigation.
SROs also exercise substantial enforcement authority. They monitor member operations, conduct examinations, and investigate suspected violations. Available sanctions include fines, restitution, suspension, and permanent bars ending a broker-dealer’s ability to operate. Decisions can be appealed within the SRO, then to the SEC, and ultimately to federal court.
An SRO for AI
The regulatory challenges that motivated the SRO model in finance—information asymmetry, rapid innovation, systemic risk, and coordination failures among competitors—mirror the challenges of AI. The translation of the SRO structure from finance to AI is intuitive. An AI SRO would not address every risk the technology poses, from job loss to copyright infringement. It would target the catastrophic risks that the frontier labs themselves have identified as most salient, such as CBRN (chemical, biological, radiological, and nuclear) hazards, advanced cyberattacks against critical infrastructure, and threats from autonomous AI behavior.
Congress could pass legislation creating a supervising agency and mandating that every AI company meeting certain parameters join. These parameters could include compute size, revenue, and AI research and development spending, aiming to capture frontier labs without preventing new entrants. The supervising agency could draw from expertise currently housed in the Center for AI Standards and Innovation (CAISI) but should be a new creation—as the SEC was—to preserve CAISI’s existing remit and avoid the political baggage of rival candidates such as the Federal Trade Commission or the Federal Communications Commission. The agency would recognize the Frontier Model Forum (FMF), which is already a proto-SRO, as the industry-run self-regulatory organization. FMF would create a board according to agency requirements and begin promulgating rules. The SRO’s budget would come from fees paid by AI labs rather than annual appropriations from Congress. The supervising agency would have final say over SRO rules and enforcement actions to comply with the constitutional private nondelegation doctrine.
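To make the membership trigger concrete, here is a minimal sketch in Python of how statutory parameters might be encoded. Every threshold, field name, and function here is invented for illustration; the enabling statute, not this sketch, would fix the actual criteria and values.

```python
from dataclasses import dataclass

# Hypothetical statutory thresholds -- illustrative values, not proposals.
TRAINING_COMPUTE_FLOP = 1e26   # total compute used to train any one model
ANNUAL_AI_REVENUE_USD = 500e6  # revenue attributable to AI products
ANNUAL_AI_RND_USD = 100e6      # annual AI research and development spending

@dataclass
class Lab:
    name: str
    max_training_compute_flop: float
    annual_ai_revenue_usd: float
    annual_ai_rnd_usd: float

def membership_required(lab: Lab) -> bool:
    """Return True if the lab must join the SRO.

    An any-of test: crossing any single threshold triggers membership.
    This aims to capture frontier labs while leaving small entrants,
    which sit below every threshold, outside the regime.
    """
    return (
        lab.max_training_compute_flop >= TRAINING_COMPUTE_FLOP
        or lab.annual_ai_revenue_usd >= ANNUAL_AI_REVENUE_USD
        or lab.annual_ai_rnd_usd >= ANNUAL_AI_RND_USD
    )
```

The any-of design reflects the stated goal: a lab crossing any one threshold is plausibly operating at the frontier, while requiring all three would let a high-compute, pre-revenue lab escape the rules.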
The SRO would be governed by a board that balances independent directors focused on safety with industry representatives attuned to development speed. Each member firm would hold a board seat, with additional independent directors appointed subject to the supervising agency’s approval. To further reduce information asymmetry, research and safety staff from the labs could be seconded to serve on technical committees writing audit rules. Industry funding would enable the SRO to pay staff market wages without government salary restrictions. Concerns about industry capture of an SRO are real, but any regulatory scheme—including the status quo—faces this challenge. Tech companies already exert enormous influence over AI policy through opaque lobbying; an SRO would replace that with a formal, public process in which the government and outside parties have a structured role.
The rulemaking process solves the pacing problem. As with FINRA, full rule proposals would go through agency review and public comment, with the enabling statute requiring approval within months rather than years. Minor changes—including updates to testing benchmarks and adjustments to evaluation protocols as capabilities evolve—could take effect immediately, with the agency retaining a window to suspend and reconsider. Legislatures and courts cannot match this speed. Substantive SRO rules could also include red-teaming requirements, minimum pre-deployment testing periods, minimum revenue allocation to safety, public disclosure of safety protocols, and third-party auditing. A lab that released a model in violation of these rules could face temporary suspension of the model, fines, or in extreme cases a bar ending its ability to operate.
Consider the following hypothetical: A frontier lab develops a model that scores above a defined threshold on an FMF bio-risk evaluation benchmark. Under FMF rules, the lab must submit the results of a specified battery of additional safety tests before deployment. Inadequate results require further mitigation or block the model’s release. If the lab deploys without completing the process, FMF can suspend the model and impose fines carrying the force of federal law. If the supervising agency has concurrent enforcement authority, as the SEC does, it could sue to suspend the model if FMF failed to act.
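The hypothetical’s decision logic can be sketched in a few lines of Python. The trigger score, test names, and rule structure below are all assumptions invented for this example, not actual FMF rules.

```python
from enum import Enum, auto

class Decision(Enum):
    CLEARED = auto()   # model may deploy
    MITIGATE = auto()  # further mitigation required before release
    BLOCKED = auto()   # release blocked pending completion of the process

# Hypothetical rule parameters -- illustrative only.
BIO_RISK_TRIGGER = 0.70  # benchmark score that triggers extra testing
REQUIRED_TESTS = {"wet_lab_uplift", "synthesis_pathway", "agentic_misuse"}

def predeployment_gate(bio_risk_score: float,
                       submitted_tests: dict[str, bool]) -> Decision:
    """Apply the hypothetical pre-deployment rule.

    Below the trigger score, no extra process applies. Above it, the
    lab must submit the full battery of additional tests: a missing
    test blocks release, and a failed test requires mitigation and
    resubmission before the model can deploy.
    """
    if bio_risk_score < BIO_RISK_TRIGGER:
        return Decision.CLEARED
    if not REQUIRED_TESTS.issubset(submitted_tests):
        return Decision.BLOCKED   # battery incomplete: cannot deploy
    if not all(submitted_tests[t] for t in REQUIRED_TESTS):
        return Decision.MITIGATE  # inadequate results: mitigate first
    return Decision.CLEARED
```

Deploying despite a blocked or mitigation-required result is the violation that would expose the lab to suspension and fines carrying the force of federal law.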
Most fundamentally, mandatory membership directly neutralizes the race to the bottom. FINRA governs more than 3,000 broker-dealers in a fiercely competitive market, yet because Congress has enshrined the SRO structure in statute, FINRA can foster coordination on standards of conduct among rival firms without triggering antitrust liability. An AI SRO would do the same. No lab would face a competitive disadvantage for investing in safety, because every lab would be held to the same standard. Anthropic did not abandon its safety commitments because it wanted to; it did so because the competitive structure left it no choice. An SRO changes that structure.
Where Other Models Converge
There are many possible regulatory approaches to frontier AI. As each confronts the structural problems outlined above, however, its design begins to resemble a supervised SRO. The approaches currently in place fail even to address those problems. Labs currently govern themselves through purely voluntary safety frameworks with no mechanism to prevent competitors from undercutting those who invest in safety; an SRO would resolve this race to the bottom with an antitrust exception for coordination and meaningful government supervision. An SRO also addresses the pacing problem and information asymmetry that plague direct regulation. Even the EU AI Act, the most ambitious direct regulatory effort, delegates technical standard-setting to private bodies composed mostly of industry participants, converging on the same logic a supervised SRO makes explicit.
An SRO could also shore up existing or future tort liability regimes, which struggle to address all four challenges. Labs will discount tort liability for catastrophic risk, knowing that they will be judgment-proof after a catastrophe anyway. Proposals to address that discounting, such as levying punitive damages for “near miss” events, encounter the pacing problem—law must define limits, damages, a “near miss,” and more, through either permanent legislation or slow-moving court precedent. And standard tort law does not address the race dynamic. A more sophisticated scheme such as shared residual liability might, but it functionally recreates an SRO through one-on-one firm bargaining—without transparency, democratic accountability, or public participation—and fails to address moral hazard without relying on a complex insurance apparatus.
While insurance can fill some of the gaps left by pure tort liability, it works best when paired with government regulation. The nuclear industry, sometimes held up as a paragon of insurance-as-regulation, is closely monitored by the Nuclear Regulatory Commission and has a self-regulatory organization (unsupervised) that it created after direct regulation and insurance both failed to avert Three Mile Island. Moreover, the entire pool of private nuclear insurers provides only $500 million in coverage per policy (for comparison, a single commercial aircraft is insured for $750 million to $2 billion). The government itself provides an additional tier of insurance above that, because insurers struggle to cover genuine catastrophes.
Cybersecurity insurance, the other most frequent analogy for AI insurance, demonstrates that information asymmetry and pacing problems in the digital domain often thwart underwriters. Data is proprietary, quickly outdated, and difficult to interpret. In the past several years, after decades of failure, insurers have successfully reduced insureds’ cybersecurity risk through policy enforcement, but not through sophisticated digital techniques. Baseline cybersecurity is so poor that the improvement is largely due to companies adopting long-standing practices such as multi-factor authentication and timely software patch installation. Cybersecurity teaches that insurance can curb worst practices, but frontier AI regulation should seek to identify and enforce best practices for addressing catastrophic risk from cutting-edge models.
Other approaches share many features with SROs but miss the full benefits of the formal structure. The “regulatory markets” proposal for AI regulation, for example, looks like an inchoate SRO when it gets specific. In that system, the government organizes a separate “expert group” that sets mandatory standards and licenses third-party auditors whose approval is required for model deployment. The group updates those standards in real time to adapt to evolving risks. The proposal envisions the group setting only outcome-based requirements, but verifying that auditors are producing valid results obliges the group to develop and run its own tests, duplicating the auditors’ work. Rather than maintaining two layers of technical review, an SRO could perform the audits directly, or keep third-party auditors but set process-based requirements that are easier to verify.
A similar model of “private governance” resembles government supervision over multiple, competing SROs (similar to stock exchanges in finance). It offers a liability shield to firms that opt into private governing bodies certified by the government, enabling firms to avoid regulation entirely if they join. Members can lose their shield if another member’s safety lapse triggers decertification of the body as a whole. This collective punishment is designed to prevent the race to the bottom, because firms have an incentive to join only bodies whose standards and enforcement are rigorous enough to keep the certification intact. But the system is also supposed to accommodate bodies governing niche applications with few members, undercutting the peer pressure that makes defection costly. The solution would be to require every body to exceed a minimum membership threshold; this effectively creates an SRO, missing only the legal framework, agency oversight, and public rulemaking process that make SROs democratically accountable and enable the government to set proactive standards instead of reacting to failures post hoc.
A pending legislative effort, California’s SB-813, also converges toward an SRO-like structure but stops short of its necessary features. It establishes a government commission to set standards for and license “independent verification organizations,” which can grant liability benefits to the AI companies they verify. Like the other models, the commission would be a government creation with ongoing supervisory authority over private certification, resembling an SRO’s basic structure. But it gives AI companies no direct role in setting standards, reproducing the information asymmetry and pacing problems that hamper traditional regulation. And because participation is voluntary, it cannot resolve the competition that drives labs to underinvest in safety.
SROs’ Weaknesses
After nearly a century, finance SROs have had their share of scandals, criticism, and reform (and unrealized proposals for reform). That history of scrutiny and experimentation is, however, an advantage over both untested regulatory designs and sclerotic federal agencies. A new SRO can learn from past mistakes, such as the 1994 “odd-eighths” scandal that led the SEC to require more governing board diversity, including seats reserved for the public, or the Bernie Madoff scheme that prompted FINRA to create a whistleblower office. The SEC has evaluated the SRO model against alternative institutional designs and consistently concluded that the advantages in expertise, flexibility, and cost outweigh its known weaknesses in capture and regulatory protectionism, which can be mitigated.
Many unresolved critiques of FINRA can serve as design inputs for a new SRO. Complaints that FINRA is insufficiently transparent, or has a conflict of interest in spending money collected from enforcement fines, could be addressed in a new SRO’s enabling statute. Similarly, legislation could give investigation subjects Fifth Amendment-style protections they lack today.
An AI SRO would differ from FINRA in meaningful ways. FINRA has more than 3,000 members; an AI SRO would have, perhaps, a dozen—each with outsized political and cultural influence. Smaller membership concentrates power and requires countermeasures such as robust independent board representation, strong whistleblower protections, and a supervisory agency willing to intervene. But having fewer members also alleviates many of the problems finance faces, where SEC commissioners struggle to monitor myriad rule changes from various organizations and a fraudster like Bernie Madoff can hide among thousands of broker-dealers. Public and agency oversight of the handful of frontier AI labs would be far more focused and thorough.
Similarly, the risk that incumbents write rules to entrench themselves is less acute in AI than in finance, where compliance burdens can crush mom-and-pop broker-dealers. Frontier models already cost billions of dollars in compute; compliance costs are not a meaningful moat. Agency oversight can mitigate what risk does exist, as can the public comment process, which enables potential entrants and their investors to flag anticompetitive rules before they take effect.
An SRO Is Politically Viable
Creating an SRO would not be easy, but it is the most politically viable path to regulation. Right now, neither the innovation nor the safety camp can get what it wants alone. Innovation advocates cannot achieve federal legislation preempting state AI regulation without safety advocates’ support. Safety advocates cannot get enforceable rules if industry blocks all legislation. An SRO can satisfy both: It has binding rules and real enforcement for the safety camp, direct participation for industry in writing those rules, and uniform standards that allow labs to invest in safety without facing a competitive penalty.
Alternative proposals require Congress and regulators to get the technical details right from the start, specifying what safe AI looks like in terms that survive the next generation of models. An SRO requires them only to build the structure within which those details get worked out by the people who understand them best, subject to public scrutiny and government override. The institutional template exists. The political coalition is available. What remains is the decision to use them.
