Latest NDAA Supports AI Safety, Innovation, and China Decoupling
This year’s defense bill made many significant changes to U.S. AI policy.
Since the release of ChatGPT in late 2022, most successful federal lawmaking on artificial intelligence (AI) has occurred within the annual defense bill. This year’s National Defense Authorization Act (NDAA) was no exception. Enacted in December 2025, the bill contains a title devoted to AI and other emerging technologies, as well as numerous AI-related provisions scattered throughout its 1,259 pages of text. Collectively, these provisions will significantly reshape how America approaches AI innovation, AI safety, and U.S.-China competition.
Innovation and Adoption
The latest NDAA repeatedly emphasizes the importance of using AI in the U.S. military and defense industrial base. Section 350 directs the Army, Navy, and Air Force to establish pilot programs that will use AI to improve ground vehicle maintenance. Section 1534 establishes a task force to help develop and deploy computing environments across the Defense Department to support AI activities. Section 6602 directs the intelligence community to promote the sharing, across its elements, of AI systems that “have the greatest potential for re-use without significant modification.” Section 225 directs the Army to expand robotic automation at certain munitions manufacturing facilities. Other uses of AI extend to back-office functions like supply chain monitoring (Section 1019), financial auditing (Section 1007), and logistics (Section 347).
Another set of provisions enacts structural reforms to help the military quickly build, acquire, and deploy new technologies. Section 218 establishes an alternative test and evaluation pathway to “enhance agility” for all new software-related acquisition programs. Section 902 expands the responsibilities of the Defense Department’s chief technology officer, including a duty to “support[] the rapid transition of technology from the research and development phase into operational use.” Section 907 tasks the Defense Science Board with recommending potential reorganizations of the Office of the Secretary of Defense to “maximize the output of digital solutions engineering and software delivery activities” across the Defense Department, potentially including the creation of a new defense agency.
Notably, the executive branch shares Congress’s focus on military AI and tech advancement. For example, several executive orders prompted sweeping reforms to the Federal Acquisition Regulation and its Defense Department supplement, seeking to bring greater speed and agility to defense procurement. In November, the Pentagon’s Acquisition Transformation Strategy billed itself as an “aggressive” effort to “[f]ield technology and modernize systems at a rate that outpaces our adversaries.” The next month, Secretary of Defense Pete Hegseth announced the launch of GenAI.mil, a bespoke platform focused on equipping the U.S. military with generative AI capabilities from companies like Google and Elon Musk’s xAI. Hegseth later declared that “very soon, we will have the world’s leading AI models on every unclassified and classified network throughout our department.” He also released an “AI Acceleration Strategy,” which centers on converting the U.S. military into an “‘AI-first’ warfighting force.” Between actions like these and the NDAA’s AI focus, it’s safe to assume that AI advancement will be a top priority for the Defense Department in 2026.
Safety, Security, and Oversight
The new NDAA pairs its AI innovation measures with numerous safety-related provisions. Section 1061 increases congressional oversight of the Defense Department’s use of autonomous weapons, including visibility into the department’s legal analysis. Section 6602 requires each intelligence community element to track the “efficacy, safety, fairness, transparency, accountability, appropriateness, lawfulness, and trustworthiness” of its procured and in-house AI systems, while Section 6603 directs the intelligence community to establish risk-based testing standards for common AI use cases. Section 1533 directs the Defense Department to create an AI assessment framework that includes testing procedures, security requirements, and “compliance with ethical principles.” The department will use this framework to assess all its major AI systems.
Unlike the NDAA’s provisions for AI advancement, these somewhat cautious provisions arguably diverge from the Defense Department’s current AI approach, which is more risk-tolerant. The department’s AI Acceleration Strategy insists that “[w]e must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment.” It also calls for approaching risk trade-offs “as if we were at war,” and it establishes a corresponding “‘Barrier Removal Board’” that will, in the words of Hegseth, target “anything that slows down the acceleration of AI capabilities.” Thus, the AI assessment framework’s implementation might end up being less safety-oriented than its statutory text suggests.
One safety-related domain where the Pentagon and Congress appear to align is the intersection of AI and cybersecurity. Hegseth explained in January that “objectively truthful AI capabilities employed securely” are a core part of how the Defense Department defines “responsible AI.” Similarly, Section 1512 of the NDAA directs the department to develop and implement a department-wide policy for the cybersecurity of AI models and applications, while Section 1513 orders the development of cyber and physical security standards to mitigate risks to the department from its adoption of AI technologies—notably, these standards will become requirements for all relevant procurement contracts. Section 6601 tasks the National Security Agency’s AI Security Center—which was given statutory footing in last year’s NDAA—with developing security guidance to protect advanced AI technologies from theft or sabotage by nation-state adversaries.
While many provisions focus on current AI capabilities, the NDAA also considers how future AI capabilities could create risks and opportunities for national security. Specifically, Section 1535 establishes an “Artificial Intelligence Futures Steering Committee” composed of senior defense officials to forecast AI development, assess adversaries’ trajectories, analyze the threat landscape, formulate risk mitigation policies, and generally plan for advanced AI systems such as artificial general intelligence (AGI). Though undefined in the statutory text, AGI is commonly understood as a highly advanced form of AI that could automate most job functions that require human thinking today. Indeed, an earlier version of the NDAA defined it as AI-capable systems with “the potential to match or exceed human intelligence across most cognitive tasks.”
The NDAA also includes the Safer Skies Act, which authorizes state, local, tribal, and territorial law enforcement—after completing training and certification—to take certain defensive actions against credible drone threats to people, facilities, assets, prisons, jails, critical infrastructure, and venues for large public gatherings. These new powers could grow even more important over time if AI advancements continue strengthening the autonomous capabilities of drones.
Tech Decoupling From China
AI-related technologies have long been central to efforts to disentangle the U.S. and Chinese civilian and military supply chains: for years, the U.S. has imposed export controls on AI-relevant computing hardware. This year’s NDAA continues the trend toward decoupling.
The most straightforward example is Section 6604, which effectively bans DeepSeek’s namesake application across national security systems in the intelligence community. Meanwhile, Section 1532 enacts a similar prohibition across the Defense Department, with a broader scope covering all AI models from DeepSeek and from companies linked to DeepSeek’s funder. Section 1532 also directs the secretary of defense to consider issuing guidance to restrict the department’s use of AI systems from companies based in an adversary nation, companies subject to “unmitigated” influence by an adversary nation, companies supporting the Chinese military, and companies on the Consolidated Screening List.
Another example of AI-related decoupling is Section 842, which restricts Defense Department procurement of advanced batteries whose functional cell components or technology are owned, sourced, refined, or produced by a “foreign entity of concern”—a term grounded in a definition in the Infrastructure Investment and Jobs Act and a list in an earlier NDAA that included major Chinese companies such as CATL and BYD. There are some exceptions, and the restrictions take effect in phases beginning in 2028, 2029, and 2031. Since the restrictions extend to batteries “embedded within warfighting and support systems,” Section 842 could reshape the U.S. military’s adoption of AI-enabled robots and drones, which in many cases rely on batteries from Chinese supply chains.
The NDAA’s most significant decoupling measure may be the Comprehensive Outbound Investment National Security (COINS) Act. The bulk of this act establishes rules restricting the flow of capital from the U.S. into sensitive technologies, such as AI, semiconductors, and high-performance computing, with links to “countries of concern,” including China. The act builds on President Biden’s 2023 executive order, which invoked the International Emergency Economic Powers Act to create China-focused restrictions on outbound investment. The Treasury Department’s subsequent implementing regulations, which remain in place, prohibit or require notification of certain investments in AI systems trained above specific computational thresholds or intended for use cases of concern. The Treasury Department now has clear statutory authority to modify these regulations, and it is also expected to engage with allied countries to help them develop similar programs. To help the department carry out its duties effectively, the COINS Act authorizes $150 million in annual funding for two years—approximately nine times more per year than the department requested in early 2024 to establish an outbound investment review program.
The NDAA adds to a growing trend of AI-related policies aimed at decoupling from China. In April 2025, the Department of Justice issued guidance for complying with its Data Security Program, which seeks to prevent foreign adversaries from using bulk sensitive personal data and U.S. government-related data to “develop AI and military capabilities,” among other things. Months later, the One Big Beautiful Bill restricted tax credits on energy infrastructure for projects linked to foreign entities of concern, which could indirectly affect how America powers AI data centers. More recently, the Federal Communications Commission imposed import restrictions on new models of foreign drones, many of which come from China. However, the Trump administration’s evolving efforts to approve high-performance AI chip sales to China remain a notable exception to the decoupling trend.
* * *
For the time being, most significant AI lawmaking occurs annually in the NDAA. Buried amid 1,259 pages of legislation, the NDAA’s AI provisions may receive less attention in news cycles, but they are nevertheless significant, with implications for the future of AI capabilities, security, and geopolitics. They appear to illustrate a U.S. approach to AI that emphasizes both innovation and safety while building independence from China.
