Governing Frontier AI: California’s SB 53

In late September, California Gov. Gavin Newsom signed Senate Bill 53 (SB 53), the Transparency in Frontier Artificial Intelligence Act (TFAIA), making California the first U.S. state to enact legislation specifically aimed at regulating advanced AI systems. In the United States, technological development and adoption usually outpace regulatory action. The passage of this legislation in California—home to most of the world’s leading AI companies and research labs—marks a key milestone in policymakers’ attempts to address the potentially catastrophic risks posed by AI. With implementation scheduled for January 2026, SB 53 builds a governance architecture for frontier AI that emphasizes transparency, whistleblower protections, public infrastructure, and adaptive oversight, seeking to balance safety and innovation. As with the state’s privacy legislation, the 2018 California Consumer Privacy Act and 2020 California Privacy Rights Act—which were also America’s first comprehensive privacy laws—SB 53 reflects California’s leadership role in setting standards and norms for emerging technologies.
Main Pillars of SB 53: Transparency, Accountability, and Innovation
The core focus of SB 53 is to bring visibility, accountability, and public oversight to frontier AI models, which have so far been developed largely behind closed doors by a handful of private labs. The law aims to mitigate catastrophic risks from these frontier AI models—systems trained with vast computational resources that could, in some domains, operate autonomously at or beyond human capability. SB 53 defines “catastrophic risk” as a foreseeable and material risk that a frontier model could materially contribute to mass harm, such as causing more than 50 deaths or over $1 billion in damage, by enabling weapons of mass destruction, conducting serious crimes or cyberattacks without meaningful human oversight, or evading its developer’s control. The legislation requires any company training large-scale AI models (a “frontier developer”) to publish basic information about its models even before generating revenue, while the majority of reporting obligations outlined in SB 53 apply only once a company’s annual revenue exceeds $500 million (a “large frontier developer”). By tiering obligations in this way, the law ensures that the heaviest regulatory burdens fall on the players with the most powerful and potentially highest-risk systems.
Transparency and Accountability
The first pillar of SB 53 is transparency—in terms of both what companies must disclose to the public and how they are held accountable when risks emerge. The law requires large frontier developers to publish a “frontier AI framework” describing how they incorporate national and international safety standards and industry best practices into their systems. Before deploying new or “substantially modified versions” of existing frontier models, developers must publicly release transparency reports explaining how they evaluate and reduce catastrophic risks, from misuse to model failure. And if critical safety incidents occur, developers are obligated to report them to the California Office of Emergency Services.
These obligations formalize practices that a few frontier developers—such as Anthropic, which publishes detailed system and risk assessment documentation—already follow voluntarily. But the legislation also compels transparency from labs that have not followed best safety practices. xAI, for instance, has drawn criticism for opaque safety testing and a lack of public reporting after its chatbot Grok was taken offline for producing antisemitic content.
SB 53 also introduces robust whistleblower protections, reflecting growing concern over how AI companies handle internal dissent. That concern was crystallized in May 2024, when OpenAI faced backlash for pressuring departing employees to sign sweeping nondisclosure and nondisparagement agreements that would have barred them from criticizing the company—even over publicly known information. The controversy helped build momentum and consensus for stronger legal safeguards, including the bipartisan AI Whistleblower Protection Act introduced in Congress. Under SB 53, employees can report safety concerns about “catastrophic risks” to state or federal authorities without fear of retaliation, and developers cannot use contracts or policies to silence whistleblowers. Large frontier developers must also create anonymous internal channels through which employees can raise safety concerns with company leadership.
Innovation and Public Infrastructure
SB 53 also aims to enhance the conditions for innovation by creating CalCompute, a state-backed consortium to be based at the University of California and tasked with designing a public cloud computing cluster. The bill assigns California’s Government Operations Agency to develop a framework for the scale, structure, and operation of CalCompute by January 2027. This initiative addresses one of the structural constraints of the AI ecosystem: Access to large-scale compute—the infrastructure necessary to train and run cutting-edge models—is concentrated in the hands of a few well-funded firms. That high barrier to entry limits who can participate in frontier development and reinforces existing power asymmetries in the industry. CalCompute is designed to democratize access to these resources, providing startups, academic researchers, and nonprofits with the computational capacity to compete and innovate. Other U.S. states and countries are also developing public computing infrastructure, including proposals for shared AI compute reserves at the state, national, and international levels.
Adaptive Governance
SB 53 acknowledges that one of the central challenges of regulating a rapidly evolving technology like AI is that regulations often become outdated before they can even be implemented. To bridge this gap, the legislation builds in a feedback loop that allows the regulatory framework itself to adapt promptly. It directs the California Department of Technology to recommend annual updates to the law, taking into account new technological capabilities, emerging safety research, and developing international norms. This mechanism is intended to prevent the catch-up game that has plagued previous efforts to regulate emerging technology.
Building Consensus: How SB 53 Took Shape
SB 53’s success was shaped by an earlier failure. In 2024, Newsom vetoed SB 1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, a sweeping bill that also focused on frontier AI and would have required mandatory third-party audits, “kill-switch” shutdown mechanisms, and developer liability for catastrophic harms. While Newsom shared the goal of preventing catastrophic risks, he argued that SB 1047 would impose blanket restrictions on all applications of frontier AI, regardless of their risk, and was misaligned with the pace of technological change.
After vetoing SB 1047, Newsom convened a group of researchers and policy experts, called the Joint California Policy Working Group on AI Frontier Models, and tasked them with producing concrete recommendations for future legislation. The group’s final report, released in June 2025, called for “targeted interventions” that could balance the transformative potential of AI with the need to mitigate its most serious risks. California state Sen. Scott Wiener—the author of both SB 1047 and SB 53—revised SB 53 to align more closely with the report’s recommendations.
In addition, SB 53 reflects a pragmatic compromise among the different perspectives within the AI expert community. The AI governance discourse can be divided into three camps, also called the AI triad, a framing coined by Harvard Law Professor Jonathan Zittrain: accelerationists, who see rapid AI development as essential to economic and geopolitical leadership; safetyists, who warn of existential risks that demand caution and control; and skeptics, who argue that focusing on long-term existential threats distracts from immediate challenges such as bias, fraud, environmental costs, and disinformation. SB 53 attempts to reconcile these competing visions by acknowledging catastrophic risk without attempting to halt innovation and by institutionalizing accountability without imposing prescriptive mandates.
That said, the tech industry has largely opposed the bill. OpenAI, Meta, Google, and the Chamber of Progress have lobbied against SB 53, warning that it would require duplicative reporting and stifle innovation. Anthropic is the only major developer to openly support the bill, and safety advocacy groups also see the legislation as a victory. Notably, even some opponents of SB 1047—like Andreessen Horowitz partner Martin Casado, Harvard researcher Ben Brooks, and former White House AI policy adviser Dean Ball—have praised SB 53 as a more balanced and technically realistic approach to governing frontier AI.
SB 53 and the Future of AI Governance in the U.S.
SB 53 sits within a fragmented and still-evolving U.S. regulatory landscape for AI. At the federal level, there is no comprehensive law governing AI development and deployment. In the meantime, states have stepped in—but in recent years, their efforts have shifted away from comprehensive regulation toward more narrowly targeted issues. Several states, for example, have passed laws restricting the use of AI for mental health services. Nevada’s AB 406 bans schools from using AI to perform the duties of counselors or psychologists and prohibits developers from making misleading claims about their models, while Illinois (HB 1806) and Utah (HB 452) have passed similar restrictions. On the frontier safety front, New York’s RAISE Act contains measures similar to SB 53’s and is awaiting Gov. Kathy Hochul’s signature.
These state-led efforts unfold against the backdrop of an increasingly contentious debate over whether AI regulation should be driven by states or the federal government. Earlier this year, Republican lawmakers sought to include a 10-year moratorium on state AI regulation in the “One Big Beautiful Bill”—a measure that was ultimately stripped from the bill but that Sen. Ted Cruz (R-Texas) has vowed to revive. The Trump administration’s AI Action Plan also proposes penalizing states that enact AI rules by limiting their access to certain federal funding—indicating a preference for a federally led approach to AI governance. Advocates of preemption argue that a patchwork of state laws could create redundancy, inefficiency, and technical inconsistency. However, public opinion polling shows that most Americans favor stronger regulation, with one survey finding that more people worry that the U.S. government won’t do enough to regulate AI than fear it will overregulate.
With most frontier-AI companies headquartered in California and conducting substantial business there, the state’s law is likely to influence corporate practices nationwide and shape the trajectory of future policy debates. Yet what makes SB 53 particularly novel is its built-in bridge between state and federal oversight: The bill allows California regulators to recognize forthcoming federal laws, regulations, or guidance as satisfying state requirements—even if those federal measures don’t formally preempt state law. This mechanism allows companies to opt into compliance with state law through a federal alternative, signaling a new form of “cooperative AI federalism.” In doing so, SB 53 represents an effort to build consensus in a fractured regulatory environment—a state-led attempt to balance innovation and risk mitigation while federal lawmakers continue to debate how, and by whom, AI should be governed.