
Narrowing the National Security Exception to Federal AI Guardrails

Amos Toh
Thursday, June 26, 2025, 8:00 AM

Fostering public trust in how the government uses AI to protect national security requires robust and enforceable rules on how it is authorized, tested, disclosed, and overseen.

Eisenhower Executive Office Building (Wayne Hsieh, https://shorturl.at/OK7oV; CC BY-NC 2.0, https://creativecommons.org/licenses/by-nc/2.0/)


Immediately upon taking office, President Trump replaced the Biden administration’s executive order on artificial intelligence (AI) with his own, directing agencies to roll back any regulation or policy that poses “barriers to American AI innovation.” But this deregulatory push has taken an unexpected turn. In April, the White House released a pair of memoranda on using and acquiring AI across the federal government, which many had feared would gut Biden-era safeguards seeking to ensure that the technology is safe, effective, and trustworthy. Instead, the memos uphold many of these safeguards, recognizing that AI innovation cannot come “at the expense of the American people or any violations of their trust.” This commitment to public trust should also lead the administration to level up the rules governing national security applications of the technology, which lag far behind the recently released memos. Congress should also pass legislation that codifies safeguards and provides mechanisms for enforcement and oversight.

Issued by the Office of Management and Budget (OMB), the April 2025 memos, which replaced 2024 guidance from the Biden administration, direct agencies to prioritize and fast-track integrating AI into government services—a marked change of pace from the 2024 guidance, which largely encouraged caution over speed in experimenting with the technology. The more bullish approach outlined in the 2025 memos coincides with the Trump administration’s push to remake the federal bureaucracy with AI, from identifying layoffs to monitoring the communications of workers and targeting government contracts for cuts.

Differences aside, there are important similarities between the 2024 and 2025 memos that serve as a reminder that the drive to promote innovation is not a license to abandon guardrails. The 2025 memo outlining the main risk management practices retains many of those established by its predecessor: For example, it requires agencies to complete AI impact assessments, conduct pre-deployment testing, incorporate public feedback, and provide meaningful opportunities to appeal negative impacts. Agencies must also continue to compile and update inventories of use cases—a key transparency measure initiated by the first Trump administration. The 2025 memo even expands the criteria that impact assessments should evaluate, requiring agencies to critically examine whether the data used to train and operate an AI system is fit for purpose, and whether AI use will result in any cost savings. 

Still, there are troubling exclusions and omissions. The 2025 OMB memo no longer requires agencies to refrain from using AI if the risks outweigh the benefits—a commonsense measure that agencies should still implement on their own. OMB has also scrubbed all references to measuring and mitigating bias, including the need to account for AI’s impact on minorities and other underserved communities. This deletion, coupled with the Trump administration’s ongoing purge of federal diversity, equity, and inclusion (DEI) initiatives, will chill much-needed efforts to evaluate these impacts. The 2025 memo also limits risk monitoring and mitigation to impacts on the “privacy, civil rights, and civil liberties of the public” as well as “unlawful discrimination.” The administration’s attempt to exclude disparate impact liability from the latter will sideline efforts by agencies to grapple with AI-facilitated abuses for which there is no clear discriminatory intent, such as gender or racial disparities in the accuracy of facial recognition systems. The 2025 memo also maintains problematic loopholes created by the 2024 one—most notably, the broad discretion afforded to agencies to waive risk management practices when they “increase[] risks to safety or rights overall” or “create an unacceptable impediment to critical agency operations.”  

These drawbacks notwithstanding, the 2025 OMB memo reflects consensus on at least some baseline standards to ensure the government adopts AI responsibly and safely. Both the Trump administration and Congress should expand on this promising start and bring much-needed clarity and consistency to how these rules apply to national security uses of the technology.

Under the Biden administration, AI regulation proceeded along two tracks: one set of rules created by OMB for most government uses, from hiring software to crime forecasting and benefits fraud detection, and far weaker constraints on “national security systems” that are laid out separately in a National Security Memorandum (NSM) and an accompanying AI governance and risk management framework. As of this writing, the Trump administration is still reviewing how to update the NSM. This is an opportunity to turn the page on Biden’s two-tiered approach and align the NSM with the 2025 OMB memo to the greatest extent practicable. 

Discrepancies between the two sets of standards could lead to the adoption of AI systems that undermine both national security and public trust. When agencies conduct impact assessments, for example, the 2025 OMB memo requires them to support their claims about AI’s intended purpose and benefits with “specific metrics or qualitative analysis,” while the NSM makes this optional. Agencies that opt out of providing evidence may deploy risky and unproven AI systems without rigorous justification of their utility.

Transparency about national security AI is also sorely lacking. Agencies operating under the 2025 OMB memo must publish their use-case inventories and summaries of their reasons for granting waivers of risk management practices. In contrast, the NSM requires agencies to maintain use-case inventories, but there is no duty to publish. It also requires agencies to disclose the number of waivers they have granted, but not their justification. This lack of transparency withholds from the public even the most basic information about how the government uses AI to protect national security, and the extent to which it complies with safeguards. 

When it comes to remedying AI’s harms, the NSM is a missed opportunity. The 2025 OMB memo instructs agencies to grant “individuals affected by AI-enabled decisions” access to “timely human review” and a “chance to appeal any negative impacts,” but there is no comparable guidance under the NSM. While neither memo addresses the challenges of designing effective remedies, such as the impracticality of requiring human review for a large volume of automated decisions, the OMB memo at least provides a starting point.

Flaws in the NSM’s risk management practices are exacerbated by the broad discretion it grants to agencies to waive compliance—one area of weakness it shares with the 2025 OMB memo. The NSM authorizes waivers on various grounds, such as when compliance would unacceptably impede operations (a similar provision is found in the 2025 OMB memo). Both sets of rules permit agencies to reauthorize waivers on an annual basis. In effect, agencies can indefinitely skirt rules meant to ensure that AI systems critical to national security are tested, proven, and in working order, on grounds far broader than specific and serious national security concerns.

It is understandable for the government to tailor risk management measures to operational considerations, such as the need to maintain the secrecy of covert action, or the speed at which military and intelligence operations typically unfold. But the NSM’s retrenchment of guardrails is a step too far. Providing evidence of AI’s benefits and cost savings, for example, should be the minimum required of any agency, whether pre- or post-deployment. And while protecting intelligence sources and methods may counsel against the disclosure of use-case inventories in their entirety, this should not preclude the release of unclassified summaries. Nor should this concern prevent Congress from requiring agencies to report this information to committees with jurisdiction over national security AI, such as the intelligence and armed services committees. 

The broad waiver process created by the NSM and the 2025 memo also undermines meaningful compliance with their safeguards. As my Brennan Center colleagues Spencer Reynolds and Faiza Patel have urged, waivers should be time-limited and granted only when there is a concrete plan to bring the AI system into compliance. Even if an agency has to quickly deploy the system to counter an imminent threat, it should be routine practice after the exigency has passed to evaluate whether it worked as promised or created unanticipated risks. Congress, if not the public, should also be made aware of how the technology performed. Compelling agencies to grapple with the real-world benefits and limits of how they use AI not only mitigates the risk of abuse but also helps them adapt future deployments to serve as more effective aids in national security operations.    

The NSM’s silence on remedying AI-facilitated harms is also a blow to accountability for abuses of privacy, civil liberties, and civil rights. Sentiment analysis tools used to parse social media content, for example, may misinterpret innocuous online activity (such as non-English satire or humor) as indicators of threatening behavior, contributing to inaccuracies in intelligence gathering that lead border agents to deny travelers entry into the country. How should the government notify affected travelers of its error and provide a meaningful opportunity to appeal without disclosing sensitive information about how the tool works? If the sentiment analysis was one of several errors contributing to the decision to deny entry, how should responsibility be apportioned? These are some of the questions that the Trump administration should grapple with in its ongoing review of the NSM. They also require careful study by Congress, which bears ultimate responsibility for enacting comprehensive AI regulation.

Further complicating matters, the intelligence community has exempted certain uses of publicly available AI from key measures it has taken to implement the NSM. Intelligence Community Directive 505 (ICD 505), adopted days before President Trump’s inauguration, requires intelligence agencies to develop procedures for mitigating unintended bias in AI outputs, preserving the privacy of personal information of U.S. citizens, and reporting AI misuse. The directive also creates a centralized registry for tracking how AI models are used throughout the intelligence community. However, none of these requirements apply to AI that is publicly available “without charge,” provided that agencies have “accessed it on the same terms as are available to the public” and did not modify the technology. 

This appears designed to exempt intelligence community uses of free versions of commercial AI, such as chatbots and translation tools. But the NSM does not provide for such a carve-out—and for good reason, since these uses raise risks to both national security and individual rights. Inaccuracies in AI-generated analysis of social media feeds and other public sources of information, for example, may lead analysts to overlook genuine indicators of illicit activity, while misidentifying certain individuals or groups as security threats. 

This inconsistency reinforces the need for legislation that establishes how AI systems for national security should be properly authorized, evaluated, disclosed, and overseen. Short of enacting comprehensive regulation, however, Congress should step up its scrutiny of how taxpayer dollars are spent on acquiring and developing these systems. The annual process for funding intelligence and defense spending provides an opening to mandate greater transparency and accountability, such as regular reporting from agencies on how they are implementing risk management practices, and submission of their use-case inventories. 

The continuity between the 2024 and 2025 OMB memos on risk management signals important areas of agreement on monitoring and mitigating many (though certainly not all) of AI’s risks. All the more reason, then, to ensure that national security considerations are not a blank check to bypass these safeguards. In its review of the NSM, the White House should seek to bridge the gap between both sets of rules. But executive branch regulation, while necessary, is insufficient. The responsibility for holding government AI accountable to the public interest also lies with Congress, which should establish clear and robust rules on its adoption and use. 


Amos Toh is a researcher and lawyer focused on the role of technology in abuses of economic power. He is currently senior counsel in the Brennan Center for Justice’s Liberty and National Security Program, where he examines how the business of military AI is reshaping the conduct of war.
