
California Enacted AI Bills. Now Officials Must Define Them.

Justin Curl
Tuesday, December 9, 2025, 1:00 PM
The impact of upcoming AI legislation hinges on how officials define key terms like “frontier model” and “reasonable measures.”
The California State Capitol in Sacramento. (Steven Pavlov, https://tinyurl.com/35uebr33; CC BY-SA 4.0, https://creativecommons.org/licenses/by-sa/4.0/deed.en)

California Gov. Gavin Newsom recently signed artificial intelligence (AI) bills on catastrophic risk (SB53), content provenance (AB853), pornographic deepfakes (AB621), and chatbot companions (SB243). The natural question for AI policy in California is: What comes next?

One overlooked answer is operationalizing terms like “frontier model” and “reasonable measures.” How the California attorney general and other state officials define these terms will determine the practical impact of recent AI legislation. They should aim to resolve ambiguities before the statutes take effect to encourage good-faith compliance and set standards for judicial review. The alternative is further delays to enforcement—AB853, for example, delayed the effective date of provenance standards—or years of uncertainty as definitions are clarified through enforcement actions and litigation.

Here’s a look at key ambiguities in the laws, and how state officials might resolve them.

“Frontier Model” (SB53)

SB53 regulates AI developers who create frontier models. It defines a “frontier model” as one trained with more than 10^26 floating-point operations (FLOP), a measure of the computational resources expended during training. This threshold reflects “scaling laws,” the empirical finding that models trained with more compute tend to have better capabilities. But applying this definition is difficult in two ways.

The Open-Weight Models Question

Many developers build on open-weight models like Qwen, yet their obligations under SB53 are unclear. For example, if developers fine-tune an open-weight model, should they include the pre-training compute for the base model in calculating whether they’ve developed a frontier model?

The statute seems to say yes: The compute-based threshold considers compute across “pre-training, fine-tuning,” and other stages. But this creates practical difficulties with tracing compute usage. Model developers don’t always disclose how much compute they used (Alibaba, for example, did not disclose the compute used to train Qwen), so downstream developers may not know whether a model has crossed the 10^26 threshold. Neither would regulators trying to enforce the law.
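To see why this matters in practice, consider a stylized calculation. The sketch below is purely illustrative: the parameter and token counts are invented, and the 6 × parameters × tokens rule is a common back-of-the-envelope estimate of training compute, not anything SB53 prescribes. The point is that a downstream fine-tuner’s own compute can be tiny while the cumulative total crosses 10^26.

```python
# Illustrative only: cumulative compute accounting for a hypothetical fine-tune
# of an open-weight base model. All figures are invented, and the ~6 * N * D
# rule is a rough community heuristic, not a statutory test.

SB53_THRESHOLD_FLOP = 1e26  # SB53's compute threshold for a "frontier model"

def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough estimate of training compute: ~6 FLOP per parameter per token."""
    return 6 * parameters * training_tokens

# Hypothetical open-weight base model: 1T parameters pre-trained on 20T tokens.
base_pretraining_flop = estimate_training_flop(1e12, 2e13)  # ~1.2e26 FLOP

# Hypothetical downstream fine-tune: same model, 2B tokens of customer-service data.
fine_tuning_flop = estimate_training_flop(1e12, 2e9)        # ~1.2e22 FLOP

cumulative_flop = base_pretraining_flop + fine_tuning_flop

print(f"Fine-tuning alone: {fine_tuning_flop:.1e} FLOP")
print(f"Cumulative total:  {cumulative_flop:.1e} FLOP")
print("Crosses 10^26 under a cumulative reading?", cumulative_flop > SB53_THRESHOLD_FLOP)
```

Read non-cumulatively, the same fine-tune sits roughly four orders of magnitude below the threshold.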

And even when compute disclosures are available, a cumulative approach might sweep in companies that seem far from the statute’s intended targets. SB53 applies to companies with revenues exceeding $500 million that train a frontier model. Airbnb, for example, uses Qwen extensively; if it fine-tuned that model for customer service, it might technically qualify as a frontier developer—even though its core business has nothing to do with advancing AI capabilities.

But if the statute does not take a cumulative approach, application developers like Cursor, Harvey, and Character.AI could potentially circumvent the statute by fine-tuning open-weight models. Even though they would be deploying models with capabilities at or near the frontier, these developers would face little oversight.

California’s Department of Technology, which can update this definition, should aim for a middle path. It could count the cumulative compute for a model but limit “frontier developer” to companies with a substantial AI development business (that is, cover Google and Microsoft but not Airbnb). This approach has its own ambiguities, but it at least draws a line that avoids the most severe cases of over-inclusiveness (covering companies fine-tuning frontier AI models) and under-inclusiveness (exempting companies building on open-weight models).

The Model Distillation Question

Distillation involves training a separate “student” model to emulate a larger, more capable “teacher” model. A distilled model can rival the teacher model’s capabilities while using far less compute in training. This strains the viability of compute-based frontier AI regulation by widening the disconnect between compute and capabilities.

The Department of Technology may need to include a teacher model’s compute in evaluating whether a distilled model qualifies as a frontier model to avoid enforcement gaps. This admittedly blurs the line between models and may be difficult to implement for the same compute-tracing reasons discussed above for open-weight models. Yet without counting that compute, developers could bypass SB53 by training a larger teacher model and distilling it into a smaller model that has frontier capabilities but falls outside the statute’s scope.
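A stylized example (with invented compute figures) shows how this accounting choice flips the outcome: counted alone, a distilled model can sit well below the 10^26 threshold even when the teacher it was distilled from sits well above it.

```python
# Illustrative only: how the distillation accounting choice changes the result.
# Both compute figures are hypothetical.

SB53_THRESHOLD_FLOP = 1e26

teacher_training_flop = 2e26      # hypothetical frontier-scale teacher model
student_distillation_flop = 5e24  # hypothetical distilled student model

print("Student compute alone crosses 10^26?",
      student_distillation_flop > SB53_THRESHOLD_FLOP)                          # False
print("Student plus teacher compute crosses 10^26?",
      student_distillation_flop + teacher_training_flop > SB53_THRESHOLD_FLOP)  # True
```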

Because New York’s RAISE Act also adopts a compute-based definition of a frontier model, the New York attorney general will face these same implementation questions. For more on challenges with compute-based thresholds, see Venket Somala, Anson Ho, and Séb Krier’s excellent essay.

“Reasonable Measures” to Prevent Sexually Explicit Material (SB243)

SB243 requires chatbot developers to take “reasonable measures” to prevent their products from generating sexually explicit material during conversations with minors.

What counts as reasonable often depends on prevailing industry practice. If most companies adopt one approach (perhaps filtering sexually explicit inputs and outputs), it would be hard to convince judges that “reasonable measures” require going beyond what others have done, especially when companies can cite higher costs or degraded performance as reasons for not doing more. This means the reasonable-measures standard may set a floor, not a ceiling, on safety practices. If industry practice defines reasonableness, and companies know this, they have less incentive to exceed minimum standards.

The Office of Suicide Prevention (OSP), which oversees this statute, could overcome these incentives by issuing guidance on acceptable error rates, jailbreak resilience, and performance trade-offs. As the implementing agency, OSP’s guidance would likely be given greater weight by courts, though OSP may need to hire technologists to properly define these measures.

Such guidance can be particularly useful in navigating the trade-off between false positives (blocking legitimate requests) and false negatives (allowing harmful content through). Companies have natural incentives to reduce false positives, since blocked requests frustrate or annoy users. The incentives to reduce false negatives are weaker: Many minors who receive inappropriate content don’t report it (since they requested it), and those who do often lack effective channels for doing so. Given the unique and substantial harms of chatbots producing sexually explicit content for minors, and given these skewed incentives, OSP should issue guidance that prioritizes reducing false negatives, even at the cost of more overrefusals.
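A toy example makes the trade-off concrete. The classifier scores and labels below are synthetic, and the fivefold weight on false negatives is an arbitrary illustration of the kind of policy judgment OSP guidance could encode; the “right” blocking threshold follows directly from how missed explicit content is weighted against overrefusals.

```python
# Illustrative sketch: choosing a blocking threshold for a hypothetical
# explicit-content classifier. Scores, labels, and the false-negative weight
# are all invented.

# (classifier score, content is actually explicit)
examples = [
    (0.05, False), (0.20, False), (0.35, False), (0.45, True),
    (0.55, False), (0.60, True),  (0.75, True),  (0.90, True),
]

def error_counts(threshold: float) -> tuple[int, int]:
    """Count false positives (benign content blocked) and false negatives
    (explicit content allowed through) at a given blocking threshold."""
    fp = sum(1 for score, explicit in examples if score >= threshold and not explicit)
    fn = sum(1 for score, explicit in examples if score < threshold and explicit)
    return fp, fn

FALSE_NEGATIVE_WEIGHT = 5  # policy choice: a missed explicit output is 5x worse

for threshold in (0.3, 0.5, 0.7):
    fp, fn = error_counts(threshold)
    cost = fp + FALSE_NEGATIVE_WEIGHT * fn
    print(f"threshold={threshold:.1f}  false_positives={fp}  false_negatives={fn}  weighted_cost={cost}")
```

With false negatives weighted heavily, the most aggressive (lowest) threshold minimizes the weighted cost; weight false positives more heavily instead, as companies’ incentives push toward, and a laxer threshold wins.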

“To the Extent Technically Feasible” for Provenance Data (AB853)

AB853 requires large online platforms to detect and label AI-generated content “to the extent technically feasible.” This qualifier could create an exception that swallows the rule.

Current provenance technologies have known limitations. Watermarks and metadata can be stripped from images by taking screenshots, compressing files, or cropping images. A platform might argue that once provenance data is removed, detecting it is no longer “technically feasible.” This reading isn’t unreasonable given the statutory language, but it might limit the law’s practical effect.
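To illustrate how little it takes to lose embedded provenance, here is a small sketch using the Pillow imaging library. The provenance tag is a made-up EXIF note rather than a real C2PA manifest, but the failure mode is similar: a screenshot-style copy keeps only the pixels.

```python
# Illustrative only: embedded provenance metadata survives only if every
# downstream step carries it forward. The "provenance" tag here is a made-up
# EXIF note, not a real C2PA manifest.
import io

from PIL import Image

# Create a small image and attach a hypothetical provenance note via EXIF.
img = Image.new("RGB", (64, 64), color="gray")
exif = img.getexif()
exif[0x010E] = "AI-generated; provenance: hypothetical-manifest-123"  # ImageDescription tag

buf = io.BytesIO()
img.save(buf, format="JPEG", exif=exif)
labeled = Image.open(buf)
print("Embedded metadata:", dict(labeled.getexif()))  # provenance note is present

# Screenshot-style copy: rebuild the image from raw pixels, discarding all metadata.
pixels_only = Image.new(labeled.mode, labeled.size)
pixels_only.putdata(list(labeled.getdata()))
print("After pixel-only copy:", dict(pixels_only.getexif()))  # empty
```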

It’s also unclear what factors bear on feasibility. Does cost matter? What about performance impact? If so, how much is too much? Without guidance, “technically feasible” could mean different things to different platforms and courts.

The California attorney general might consider clarifying the level of watermark robustness, error rates, and efficiency trade-offs that the office views as technically feasible. It could set a technique-agnostic standard that preserves platforms’ flexibility. Creating this standard would again likely require hiring internal technologists, which Attorney General Rob Bonta has already announced plans to do.

The statute also doesn’t specify when feasibility should be assessed. This creates uncertainty about whether platforms must update their systems as new provenance techniques become available. If feasibility is measured only at deployment, platforms might not be obligated to adopt better detection methods that emerge later. But if feasibility must be reassessed continuously, platforms may struggle to know whether they’re actually in compliance.

The California attorney general could require a periodic assessment of whether new techniques have become feasible and should be adopted. This review could require platforms to adopt new methods within a reasonable time frame (for example, 12 to 18 months), which would help the statute keep pace with technological progress without imposing continuous compliance costs.

A Creator “Reasonably Should Know” They Lacked Consent (AB621)

AB621 requires express written “consent” that includes a “general description of the digitized sexually explicit material.” The statute suggests that without this specific form of consent, a creator “reasonably should know” they did not obtain valid consent.

This could be read strictly: Without a statutory form, there is no valid consent, regardless of what informal agreement might have existed. Or it could be read more flexibly, as one factor in assessing the creator’s state of mind. The strict reading offers clarity and strong protection for victims, while the flexible reading might better handle unusual cases where genuine consent existed but wasn’t documented in the precise form the statute contemplates.

Regardless of which reading is correct as a matter of statutory interpretation, the California attorney general should issue guidance on when the office would prosecute cases, to reduce litigation uncertainty and help potential defendants understand their obligations.

What Needs to Happen and When?

The table below summarizes when recent AI bills will go into effect, highlighting which definitions should be clarified first.

| Effective Date | Bill | Core Focus | Details/Notes |
| --- | --- | --- | --- |
| 1/1/2026 | SB53 | Catastrophic risk disclosures | Requires developers of models exceeding certain compute thresholds to disclose safety data and protects whistleblowers |
| 1/1/2026 | SB524 | Law enforcement AI use | Requires law enforcement agencies to maintain audit trails and identify the use of AI systems |
| 1/1/2026 | AB316 | Liability defenses for AI | In civil litigation, prohibits using the defense that an AI system autonomously caused harm |
| 1/1/2026 | AB621 | Deepfake pornography | Creates a private right of action against creators and service providers involved in distributing nonconsensual AI-generated pornographic content |
| 8/2/2026 | AB853 | AI Transparency Act (Phase 1) | Delayed effective date for the original requirements of the AI Transparency Act (SB942), including providing users with a free AI detection tool |
| 1/1/2027 | AB853 | AI Transparency Act (Phase 2) | Requires large online platforms to display provenance data (source/origin) for AI-generated content uploaded by users |
| 1/1/2027 | AB1043 | Age verification infrastructure | Creates infrastructure to help enforce other state age-based statutes online |
| 1/1/2027 | SB243 | AI companion reporting | Requires developers of AI companion systems to submit annual reports detailing safety measures and usage data to the state |
| 1/1/2028 | AB853 | AI Transparency Act (Phase 3) | Requires capture device manufacturers (e.g., phones, cameras) to offer users the option to embed latent disclosure/provenance data |
| 1/1/2028 | SB243 | AI companion core requirements | Mandatory disclosures, safety protocols, and a private right of action for violations related to AI companions |

With laws on AI on the books in California, it’s easy to declare victory. Officials could let terms like “frontier model” and “reasonable measures” be defined through the standard process of enforcement actions and litigation. Companies would make their best guesses about compliance. Public prosecutors and private plaintiffs would sue. Courts would issue conflicting rulings. Eventually, after years of litigation, something resembling clarity would emerge.

This approach can be appealing for officials reluctant to take controversial positions. But the recent AI laws are only as effective as their implementation. These statutes entrust state officials with some interpretive discretion, which they should use to give these laws the clarity they need to operate well. Otherwise, California risks a regulatory regime that imposes compliance costs on companies but doesn’t accomplish much in practice.


Justin Curl is a J.D. candidate at Harvard Law School currently serving as the Technology Law & Policy Advisor to the New Mexico Attorney General. He's interested in technology and public law, with a research agenda focused on algorithmic bias (14th Amendment), binary searches (4th Amendment), and judicial use of AI. Previously, he was a Schwarzman Scholar at Tsinghua University and earned a B.S.E. in Computer Science magna cum laude from Princeton University.
