How Antitrust Can Promote AI Safety Collaborations
Frontier AI labs want to collaborate to prevent catastrophe, but fear antitrust liability. Policymakers already have the tools to fix that.
In May 2024, Jan Leike resigned as OpenAI’s head of alignment and superalignment lead. He left with a blunt message that “over the past years, safety culture and processes have taken a back seat to shiny products.” A few months later, another OpenAI safety researcher resigned and noted that “[e]ven if a lab truly wants to develop [artificial general intelligence] (AGI) responsibly, others can still cut corners to catch up. Maybe disastrously. And this pushes all to speed up.”
As frontier artificial intelligence (AI) labs race to develop AGI, there is a real concern that competitive pressures will drive a race to the bottom on safety. OpenAI, Anthropic, Google DeepMind, and others are competing in a hypercompetitive environment where the incentive to ship products quickly may override investments in safety research and responsible development practices. Genuine safety-oriented collaboration between frontier AI labs could significantly reduce catastrophic and existential risks. But labs fear that cooperating with competitors will trigger antitrust scrutiny from the Department of Justice or the Federal Trade Commission (FTC). As Anthropic noted recently, “clarity on antitrust regulation would help determine whether and how AI labs can collaborate on safety standards.”
Antitrust law and policy rightly treat agreements between competitors with suspicion. While not all collaborations are anti-competitive, the mere perception that antitrust enforcers might take action can be enough to deter risk-averse companies from pursuing cooperation. In a domain where competitive development already incentivizes cutting corners on safety, this chilling effect is dangerous.
Clarifying how private companies can work together in the public interest would mitigate legal uncertainty and advance shared goals. Policymakers have tools to address this problem without abandoning the core principles of antitrust law. Congress could amend existing frameworks to provide safe harbors for safety collaboration, or model new legislation on the antitrust exemption already in place for cybersecurity information sharing. Even without legislative action, the Department of Justice and the FTC could reinstate and update regulatory guidance to signal that good-faith safety research will be analyzed under the rule of reason rather than condemned as illegal per se. These proposals draw on existing statutory and regulatory frameworks.
This piece examines how federal antitrust policy can evolve to accommodate AI safety cooperation. It surveys potential antitrust concerns raised by lab-to-lab collaboration and introduces legislative and regulatory proposals to provide legal clarity and safe harbors for responsible coordination. A longer treatment of these issues and proposals is available elsewhere.
The Case for AI Safety Collaboration
Frontier AI development presents a unique set of catastrophic risks. These risks are well documented and include models’ vulnerability to exploitation through prompt injection or jailbreaking; their potential to facilitate cyberattacks or to contribute to chemical, biological, radiological, or nuclear weapons development; the loss of human oversight through deceptive alignment; and models’ susceptibility to unauthorized access by state and non-state actors.
Frontier AI labs have made public commitments to responsible development. OpenAI’s charter states that its mission is to ensure that AGI benefits all of humanity. Anthropic describes itself as being dedicated to building systems that people can rely on and generating research about AI’s opportunities and risks. But these are private companies with multibillion-dollar valuations, competing for leading talent with ever-growing pay packages. Even the most sincere public commitments to safety can buckle under competitive commercial pressure.
OpenAI itself acknowledges this dynamic, stating in its charter that it is “concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.” The risk is that model development outpaces safeguards, safety investments, and ethical baselines. Enmity between labs can increase the danger of an AI disaster. Conversely, collaboration can reduce hostilities and produce better outcomes by fostering mutual interdependence: each lab becomes invested in the others’ commitment to safety, creating shared incentives to uphold high standards and reducing the likelihood of reckless unilateral action.
Benefits of Collaboration
Research collaboration between competitors can yield significant benefits. As the European Union notes, cooperation in research and development is “most likely to promote technical and economic progress if the parties contribute complementary skills, assets or activities.” Until recently, the Department of Justice and the FTC similarly recognized that “consumers may benefit from competitor collaborations” through cheaper goods, more valuable products, and faster time to market.
In the AI safety context, collaboration could deliver greater risk reduction than incremental improvements at individual labs. Importantly, collaboration would not necessarily result in homogeneous offerings, as labs would still compete on their comparative advantages while adopting a joint safety-by-design approach.
Several forms of direct lab-to-lab collaboration could enhance safety:
- Safety testing and cross red-teaming: joint development and execution of standardized AI evaluations to improve detection of dangerous behavior and avoid blind spots unique to individual labs
- Incident sharing: disclosure of safety incidents and near misses to accelerate learning from failures and prevent repeated mistakes
- Information sharing: exchange of alignment methods, evaluation metrics, and best practices to facilitate rapid adoption of effective safety techniques
- Compute/resource pooling: shared access to infrastructure, enabling intensive testing and preventing safety from being deprioritized due to resource constraints
- Developmental pauses: agreements to pause development if certain safety thresholds are breached, providing time for investigation and mitigation
- Standard-setting: development and adoption of open technical standards for safety evaluations and model governance
In 2025, OpenAI and Anthropic conducted a “first-of-its-kind” joint evaluation exercise, demonstrating how labs can collaborate on safety. That collaboration, however, focused on publicly released models—potentially out of concern that collaborating on unreleased models would invite regulatory scrutiny. Deeper collaboration that requires access to pre-release models and proprietary safety research may face greater antitrust uncertainty.
Antitrust Concerns
Section 1 of the Sherman Antitrust Act prohibits “every contract, combination ... or conspiracy, in restraint of trade.” Courts usually apply one of two standards of review: conduct deemed nakedly anti-competitive (such as price fixing or market allocation) is condemned as illegal per se, while most other agreements are analyzed under the rule of reason, which weighs pro-competitive benefits against anti-competitive harms. The underlying concern is that collaboration agreements can limit independent decision-making and reduce the participants’ economic incentive to compete.
Certain forms of safety collaboration present hard antitrust cases. One instructive example is a coordinated pausing scheme: an agreement among frontier labs to halt the development of certain model types when dangerous capabilities are detected, resuming only when adequate safety measures are in place. Under such a scheme, labs would notify competitors when a model fails safety thresholds and collectively pause development until risks are adequately mitigated. A coordinated pause could be among the most practical and useful safety interventions available. But it could also readily be construed as an output restriction—one of the “paradigmatic examples of restraints of trade that the Sherman Act was intended to prohibit.”
This is not a merely hypothetical concern. OpenAI commits in its charter to a form of voluntary pausing, by way of its “Assist Clause”:
[I]f a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be a better-than-even chance of success in the next two years.
If the Assist Clause contained only a commitment to “stop competing with the project,” it might be construed as purely unilateral action that lacks the key antitrust element of an agreement. But OpenAI goes further and commits to “stop competing with and start assisting this project.” Presumably, the competitor would have to consent (in some capacity) to OpenAI’s assistance, given the clarification that OpenAI will “work out specifics in case-by-case agreements.” Once operationalized, the Assist Clause might be considered an anti-competitive agreement to restrict output by limiting the production and release of a product, in concert with a competitor, contrary to Section 1 of the Sherman Act.
Of course, there may be reason to believe that the Assist Clause will never be implemented. The charter does not make clear what would constitute a “value-aligned, safety-conscious project,” or when such an undefined project comes “close to building AGI.” The charter offers only that “a typical triggering condition might be a better-than-even chance of success in the next two years.” In the absence of measurable criteria, concerns about OpenAI’s shifting goalposts should be secondary to the threshold problem that no goalposts exist. Even so, the Assist Clause could raise antitrust issues, although the details of any antitrust action would turn on more specific facts.
Whether a collaboration actually raises antitrust concerns will depend entirely on the precise details of the agreement. But even if most safety collaborations would ultimately survive antitrust scrutiny, legal uncertainty can act as a powerful deterrent. As the Department of Justice and the FTC once recognized, “a perception that antitrust laws are skeptical about agreements among actual or potential competitors may deter the development of procompetitive collaborations.” Frontier AI labs facing intense commercial pressure may conclude that the legal, financial, and reputational costs of a regulatory action outweigh the benefits of collaboration. This chilling effect may prevent beneficial safety coordination even when it would be lawful.
Policy Solutions
Legislative Reforms
The cleanest solution would be federal legislation that provides explicit protection for AI safety collaboration. Congressional gridlock makes this difficult but not impossible, particularly if proposals build on existing frameworks rather than creating new regulatory architectures.
One option is to expand the National Cooperative Research and Production Act (NCRPA). The NCRPA encourages joint research and development by clarifying that qualifying collaborations should be analyzed under the rule of reason rather than deemed illegal per se. It also permits the recovery of attorneys’ fees in certain circumstances and limits damages to actual rather than treble damages. Well-drafted AI safety collaborations could fall within the NCRPA’s broad definition of “joint venture,” which includes activities such as theoretical analysis, experimentation, prototype testing, and the collection and exchange of research information.
However, the NCRPA expressly excludes agreements that restrict output. Congress could address this gap by amending the statute to permit output restrictions demonstrably linked to risk mitigation, subject to a transparent, time-limited, and reviewable process. This carve-out could be drafted narrowly to apply only to specific safety triggers—for example, newly identified model vulnerabilities or emergent dangerous capabilities. The original supporters of the NCRPA argued that the threat of litigation hindered U.S. technological progress; safety advocates can argue that similar threats now hinder responsible AI development. Such an amendment would address the chilling effect while preventing abuse.
A second option comes from the Cybersecurity Information Sharing Act of 2015 (CISA). CISA provides an antitrust exemption for information sharing related to cybersecurity threats, recognizing that real-time information sharing about high-impact threats serves the public interest. Specifically, CISA provides that exchanging cyber threat indicators or defensive measures “shall not be considered a violation of any provision of antitrust laws” when done for cybersecurity purposes. There is some merit to the argument that sharing relevant research between frontier labs could already fall within this language. But given the uncertainty of that interpretation, it would be prudent either to insert an express AI-related exemption into CISA or to introduce new legislation creating an antitrust exemption for AI frontier model risks.
Regulatory Guidance
A more immediate solution lies in regulatory guidance. The Department of Justice and the FTC could reinstate and update their Antitrust Guidelines for Collaborations Among Competitors—withdrawn in December 2024—to include explicit safe harbors for AI safety research. These guidelines provided an analytical framework and created antitrust “safety zones” for collaborations unlikely to have anti-competitive effects. The agencies withdrew them on the basis that the guidelines relied on outdated policy statements and did not reflect the evolution of antitrust case law under the Sherman Act. Then-Commissioner (now Chair) Andrew Ferguson dissented from the vote to withdraw the guidelines. There may be an appetite within the newly composed FTC to draft new guidelines that clearly signal that safety-oriented collaborations are encouraged while preserving enforcement discretion over genuinely anti-competitive conduct.
An alternative, and more onerous, option is for frontier AI labs to request a “Business Review” from the Department of Justice for specific collaboration proposals. The Business Review procedure allows entities to receive “guidance from the Department with respect to the scope, interpretation, and application of the antitrust laws to particular proposed conduct.” If the Department of Justice considers safety collaborations desirable, it could proactively invite labs to seek such reviews, giving labs the comfort to engage with enforcers rather than avoid collaboration entirely.
* * *
Competition policy serves important goals: protecting consumers, preventing abuses of market power, and fostering innovation. Nothing in this argument suggests that frontier AI labs should be exempt from antitrust scrutiny. The AI industry already exhibits signs of concentration. The Taiwan Semiconductor Manufacturing Company enjoys a “near monopoly” over advanced semiconductors, ASML wields a “near-total monopoly” over extreme ultraviolet lithography (an essential input in chip manufacturing), and the Department of Justice is reportedly investigating Nvidia for alleged monopolistic practices. RAND has suggested that the market for foundation models exhibits relevant characteristics of a natural monopoly. These are legitimate issues that antitrust is well suited to address, although others dispute the market concentration concerns. But the real or perceived strictures of antitrust should not unduly restrict collaborations designed and executed in good faith to prevent catastrophic AI risks.
Coordinated safety measures are necessary as frontier AI labs race toward increasingly powerful systems. The proposals outlined here—expanding the NCRPA to accommodate safety-driven pauses, creating a CISA-style exemption for AI safety information sharing, and issuing clear regulatory guidance—are starting points for discussion. Policymakers have an opportunity to ensure that legitimate antitrust scrutiny does not inadvertently obstruct the collaborations that could prevent catastrophe. Striking the balance between innovation and regulation will require careful analysis and sustained engagement between industry and government.
