
New AI Transparency Rules Have a Trade Secrets Problem

Julius Hattingh
Monday, September 15, 2025, 3:21 PM
Recent AI legislation seeks to keep the public informed, but developers may be able to dodge accountability by invoking trade secrets. 


A spate of recent artificial intelligence (AI) transparency laws introduced in New York, California, Michigan, and Illinois seek to codify a practice most frontier AI developers have already adopted voluntarily (with notable exceptions): implementing and publicly releasing “safety and security protocols” (SSPs) for their most advanced AI models.

An SSP (which also goes by names such as “Frontier AI Safety Framework” or “Responsible Scaling Policy”) is a developer’s framework for managing and mitigating catastrophic risks: that is, risks of an AI model causing mass casualties or billions of dollars in damage, for example, by enabling a chemical, biological, radiological, or nuclear attack, or by autonomously carrying out an action that would be a serious crime if performed by a human.

In light of these risks, the laws would require that SSPs provide specific details about how the safety and security of an AI model have been tested and evaluated, the results of those tests, and how risks will continue to be monitored and managed, along with other safety-critical information. By requiring SSPs to be released publicly, the laws would enable external parties to study and scrutinize a developer’s framework and hold the developer accountable to it. From a transparency perspective, this is a first step toward creating standards to help the government, researchers, and the general public understand and respond to this extraordinary technology and the companies developing it.

Yet despite their promise, one reason to doubt these bills will work as intended—and why they could even backfire—is their treatment of developers’ trade secrets. Each of the four bills appropriately allows developers to redact certain sensitive information from an SSP before publishing it to the world: for example, where disclosure would threaten public safety, national security, cybersecurity, or a developer’s trade secrets.

It is not at all surprising that trade secrets are protected from mandatory disclosure. AI companies have extremely valuable secrets, and there are strong reasons for the law to protect them. What is surprising (and concerning) is the degree to which they are protected under the proposed bills. As drafted, the bills would entitle developers to redact any details they consider necessary to protect their trade secrets, with no process for verifying or contesting the legitimacy of those redactions, even if the redactions render the public version of an SSP incomprehensible or otherwise obscure the safety profile of an AI model.

Two implications of the currently contemplated trade secret exemptions warrant further consideration. The first is that they generally allow potentially safety-critical information to be redacted, which could prevent a public SSP from serving its purpose. The second, less immediate implication is that their unqualified protection of trade secrets could weaken the authority of states to impose more stringent disclosure rules in the future. Trade secrets are protected as property under the Fifth Amendment’s Takings Clause. As a result, a law mandating their public disclosure could be challenged by a company with a “reasonable investment-backed expectation” that its trade secrets would not be interfered with. As drafted, the bills risk promoting this very expectation—namely, that even in the context of high-stakes safety risks, the fact that some information meets the test for trade secrecy will always entitle it to protection.

Two modifications would significantly improve the balance these bills strike between transparency and companies’ legitimate interests. First, the trade secret exemption should be subject to a public interest exception, such that information cannot be withheld on grounds of trade secrecy when the public interest in disclosure is sufficiently high. While this could be framed in different terms, the key point is that a developer’s economic interest in keeping information secret should not necessarily override the public interest in being reasonably informed about catastrophic risks posed by AI. Second, there should be an explicit procedure by which the government can verify or contest redactions it disagrees with under that standard, however it is set. These tweaks are within state authority, consistent with best practice in other industries, and would not overburden AI companies.

What Are Trade Secrets in the Context of AI?

When we think of AI secrets, we might think of specific algorithmic innovations and model weights. In reality, the legal definition is much broader than that. Virtually any information can qualify, so long as it satisfies two general requirements (with further nuance depending on which federal, state-level, or common law doctrine applies).

First, the information must in fact be a secret. It must be treated as a secret by the firm (for example, through security measures and confidentiality obligations), and it must not already be publicly known. Second, the information must be commercially valuable because of its secrecy. That is, the information derives competitive value from not being known, such that a competitor’s gaining access to it would undermine that value.

Beyond algorithmic secrets and model weights, there is much more information internal to an AI company that could satisfy these conditions. This includes information about architecture, training data, system prompts, compute, scientific breakthroughs, research ideas, technical and business strategies, knowledge of what does not work, and novel applications of known methods used in other industries. The very fact that a developer is pursuing or has achieved a particular capability could itself be a trade secret if it is treated as such and if revealing that fact would undermine the company’s competitive edge.

Indeed, the modern history of trade secret law is marked by increasingly broad claims of trade secrecy in the face of pressures to publish, including claims over chemicals used by fracking companies, ingredients used in cigarettes, workplace diversity data of publicly listed companies, and proprietary algorithms used by U.S. courts to inform sentencing decisions. The scope for potential trade secrets in the AI industry is similarly open ended.

The Immediate Risks of Exempting Trade Secrets From Disclosure

Given the breadth of information that trade secret protection could encompass, trade secret exceptions in transparency laws create two distinct risks.

First, there is a real risk that legitimate redactions obscure an SSP so that the public is unable to understand or scrutinize a model’s risk profile or the mitigation strategies a company has put in place. For example, some state bills ask for specific details about a developer’s testing procedures for assessing and managing severe risk. Procedures, along with methods, techniques, and processes, are precisely the sort of thing that could qualify as a trade secret. In describing its testing procedures, a developer may sometimes be able to redact sensitive details to protect its trade secrets and still provide enough information to keep the public reasonably informed. Yet this may not always be possible. Suppose a developer creates a novel, highly efficient testing procedure that a competitor could easily reverse-engineer if it knew only the procedure’s basic premise. In that case, the developer could redact the entire procedure and still be in full compliance with its disclosure obligations.

The risk that redactions obscure important information in an SSP is heightened by uncertainty about what information matters most to the public. For example, a high-level description of a safety-testing procedure may sometimes be insufficient for communicating the risk an AI model poses. Sometimes, “deep access” to nuts-and-bolts information, including databases or model weights, may be required—even though those details are very likely core AI trade secrets.

The point is not to deny the status or value of such information. Rather, it is to acknowledge a real tension between a business’s legitimate interest in preventing competitors from appropriating confidential information and the public’s legitimate interest in the information it needs to understand, study, and respond to significant safety risks. The proposed laws should strike a reasonable balance between the two—but they do not. Instead, the current bills preemptively and decisively resolve any tension in favor of trade secrecy.

A second immediate risk is that information will be redacted not because it is unambiguously a trade secret, but because there is at least an argument that it is, and the company would rather withhold it for other, less legitimate reasons. Given how broad the definition is, firms will often be able to argue that some information is a trade secret even though it might not qualify for protection were the issue ever tested (for example, because the secrecy of that information is not sufficiently valuable). Often, trade secret status is confirmed only when the question is argued in court. Until then, firms have an incentive to treat potential trade secrets as if they qualify: here, by redacting the information and asserting trade secrecy. The problem is that, without a way to test those claims, the law ends up treating developers’ redactions as presumptively valid.

What Safeguards Are in Place?

As drafted, each of the four bills offers only weak checks against these risks.

In each bill, companies making redactions must offer a description and justification to the extent possible without undermining the purpose of the redaction. This may provide some insight into the missing information and some confidence that redactions are justified. However, there are no specific rules for what counts as a sufficient justification, giving firms considerable flexibility.

In all states except California, the attorney general (and in New York’s case, the Division of Homeland Security) may request access to the unredacted version of the SSP. The possibility of government scrutiny may reduce the risk of overclaiming. However, there is no requirement that such a request be made, nor are there details about what the government can do with the unredacted SSP once it receives it.

The bills in Illinois and Michigan contain the most promising accountability mechanism. They would require developers to engage a “reputable third-party auditor” at least once a year. The auditor is tasked with assessing the developer’s compliance with its own SSP and, to a lesser extent, the SSP’s compliance with the legislation. As drafted, the latter entails an assessment of the developer’s compliance with the redaction rules. This, too, may reduce the likelihood of clearly unjustified claims. But where trade secrecy is ambiguous, it is unclear what effect an audit will have. Adjudicating such claims would require evidence and argument, and it is doubtful that private auditors chosen by the developer, whose role is much broader than reviewing redactions, would have the interest or expertise to perform this function.

In any case, none of these measures address the more fundamental issue: that safety-critical information may indeed be protected by existing trade secret law, and so companies could redact such details while fully complying with their disclosure obligations.

Fixing the Transparency Laws

There are two key steps that states should take to improve their transparency laws: first, reprioritize the public interest, and second, increase accountability for trade secret claims.

An Exception to the Trade Secrets Exemption

The transparency laws should include a principled limit to permissible trade secret redactions in favor of the public interest. For example:

Information shall not be redacted on the basis of trade secrecy if the disclosure of that information is necessary to reasonably inform the public about the nature and extent of catastrophic risks posed by an AI model and/or the developer’s protocols for managing, assessing, and mitigating catastrophic risks.

Language like this would mean that parts of an SSP that are essential for understanding the risk posed by an AI model cannot be redacted solely because their secrecy is commercially beneficial to the developer. They would be redactable only on other grounds—for example, to mitigate a public safety or national security risk. At the same time, trade secrets that do not reach this threshold of criticality to the public could still be redacted.

The proposed clause targets the purpose that a public SSP serves. It uses the existing terminology: “Catastrophic risk” is clearly defined (or “critical risk” in the Michigan and Illinois bills, and “critical harm” in the New York bill), while “protocols for managing, assessing, and mitigating catastrophic risk” reflects the definition of an SSP.

A principled approach is desirable here. As noted, it is difficult to say in advance exactly what information will be sufficiently safety-critical to warrant public disclosure. This approach allows for reasonable disagreement. That said, the law could supplement this general provision with specific details where they are known. For example, it could specify that certain information (e.g., model weights) continues to be exempt, or that other categories must be disclosed (e.g., “descriptions of techniques used for safety testing necessary to assess their adequacy”).

Adding this provision would not impose an unacceptable burden on developers. Unless a claim is contested, the clause would be triggered only when the developer itself concludes that some secret information is indispensable to reasonably informing the public, that it cannot find an alternative way to make that disclosure while protecting its secret, and that it cannot justify a redaction on other grounds. This means the law would still defer, at least initially, to developers’ own judgment about which redactions are justified. Further, developers would continue to enjoy significant freedom to decide what information to include in an SSP in the first place and how to frame that information to protect their commercial interests.

Accountability for Trade Secrecy Claims

Transparency bills should also explicitly enable the state to contest redactions. It is relatively common for firms to have to establish, against some possibility of challenge by the state, why their secrets qualify for protection from disclosure rules. This is the case, for example, at the federal level in the context of hazardous chemicals and in several state-level fracking regulations. In the former case, the law also permits members of the public to petition for such a challenge.

The current AI regulations could include similar mechanisms. For example:

  1. The developer must submit its claimed redactions to the attorney general (AG), along with its justification for each claim (currently, most bills merely allow the AG to request access).
  2. The AG can (or must) evaluate those claims against the applicable standard.
  3. Where the AG seeks to challenge a claim to trade secrecy, it gives the firm a chance to be heard and to provide alternative ways to make a satisfactory disclosure.
  4. The resulting updated SSP is released.

This process could be modified to be more or less deferential to companies’ claims. For example, the burden to justify a claimed redaction could be set at different levels, reviews of claimed redactions could be discretionary or mandatory, and an independent administrative body could conduct those reviews.

Some version of this second proposal is necessary to address the risk of overly broad redactions going unchallenged. Without it, adding an exception for safety-critical information may not make a meaningful difference to the redactions that companies consider justified. Still, the first proposal remains the most important, even without additional accountability measures, because it serves another purpose: setting appropriate expectations about how the government will treat AI trade secrets in the future.

Setting Expectations Early

Though they often do not use it, states have inherent authority to regulate trade secrets to protect fair dealing in the marketplace and public safety. For example, California’s Public Resources Code exempts certain “[h]ealth and safety data” from trade secrecy, and Pennsylvania’s regulation of oil and other storage tanks can require disclosure of information bearing on “public health, safety, welfare or the environment” even if it is a trade secret. Further, a recently proposed federal AI liability law would require any developer who wishes to benefit from its safe harbor provision to make a public disclosure, allowing redactions of trade secrets only if they are “unrelated to the safety of the artificial intelligence product.”

Nevertheless, qualifying trade secret protection in these ways raises several legal questions. The most relevant of these is the possible application of the Fifth Amendment’s Takings Clause. Other hurdles, including preemption, compelled speech, and due process, are less likely to pose an issue: The most relevant federal legislation does not preempt state trade secrets law; compelled speech concerns can be met by limiting requirements to factual, non-ideological commercial disclosures reasonably related to safety; and due process concerns can be mitigated by clear standards, notice, and an opportunity to be heard.

The Takings Clause is relevant because trade secrets are currently treated as a form of private property. A disclosure law that forces a company to reveal its trade secrets (thereby destroying the information’s competitive value) can be an unconstitutional taking in specific contexts. This can be navigated in different ways. For example, rather than “forcing” disclosure, laws can frame transparency as a choice developers make voluntarily in exchange for a benefit offered by the state, such as registration, safe harbor protection, or (arguably) participation in a particular market. Further, even if disclosure is a taking, this is not necessarily fatal if states offer a means to seek compensation.

Most urgently, whether a takings challenge can succeed will likely turn on whether the developer had a “reasonable investment-backed expectation” that its information would not be disclosable.

This was made clear in the leading case of Ruckelshaus v. Monsanto. There, the Supreme Court struck down part of a law that would have allowed the Environmental Protection Agency (EPA) to publicize details about Monsanto’s pesticide products “to protect against an unreasonable risk of injury to health or the environment.” Before the law was passed, Monsanto had provided the data in question to the EPA under assurances that it would not be publicly disclosed. The Supreme Court ruled that the EPA could not go back on its word: Those earlier assurances had established a reasonable investment-backed expectation of secrecy, and accordingly, the law constituted an unlawful taking. In contrast, for data provided by Monsanto after the new law was passed, when secrecy was not promised, the very same disclosure provision did not violate the Takings Clause.

Lower courts have sometimes construed Ruckelshaus very broadly, inferring a “reasonable investment-backed expectation” where no explicit promise was made. In 2002, the U.S. Court of Appeals for the First Circuit invalidated a law that would have let the state publish tobacco companies’ product ingredients where doing so “could reduce risks to public health.” It found that Philip Morris had a reasonable expectation of secrecy, not because of any explicit promise, but because American legal tradition had historically allowed tobacco companies to keep this information secret.

This is why it is so important to include a principled public-interest exception to the trade secrets exemption now, at the outset. AI law is still in its infancy, and it is unclear what expectations are reasonable. States have a valuable opportunity to establish appropriate standards proactively. That opportunity comes with the corollary risk: that by refusing to interfere with companies’ trade secret claims, these laws set an expectation that AI companies will continue to enjoy strong trade secrets rights in the future. That the present bills may do so is ironic, given that the report motivating one of them cites the lack of transparency in the tobacco industry as a failure not to be repeated.

***

The balance between transparency and secrecy will have a significant impact on the development of AI and the laws that regulate it. Trade secret law is one legal instrument that, if underestimated, will shape what is known and unknown, and by whom. But it will not necessarily do so in the service of the public interest in safe and secure AI. While competition is one reason to protect secrecy, it should not determine the limits of sensible policy. In the context of mitigating catastrophic risk, a reasonable compromise is necessary.


Julius Hattingh is a J.S.D. candidate at Yale Law School. Julius was a Summer Research Fellow at the Institute for Law & AI in 2025.
