
How U.S. Export Controls Risk Undermining Biosecurity

Doni Bloomfield, Joe Khawam, Tim Schnabel
Tuesday, December 2, 2025, 10:02 AM
The regulations intended to prevent bioweapons proliferation may be increasing bioweapon risks.

Published by The Lawfare Institute in Cooperation With Brookings

Major artificial intelligence (AI) labs have begun to publicly acknowledge that their models possess advanced—and worrying—biological capabilities. Between June and August, OpenAI, Anthropic, Google, and xAI all disclosed that their advanced models showed biological expertise frequently exceeding that of human experts. This knowledge could help laboratories develop new therapies but also risks enabling malicious actors seeking to design novel biological weapons. Frontier models likely haven’t yet achieved critical weaponization skills, but that moment may be fast approaching.

Recognizing this danger, major U.S. AI labs evaluate their models to ensure they can’t be used to design or deploy weapons of mass destruction. However, these evaluations themselves risk violating U.S. export control laws—even if testing is conducted entirely within the United States by American companies working in good faith. This regulatory paradox threatens national security at a critical moment in AI development. The executive branch should promptly clarify how existing export control tools can facilitate biosecurity evaluations while pursuing targeted regulatory changes to ensure that compliance supports rather than impedes safety testing.

The Export Control Trap

AI biosecurity evaluations face an unexpected obstacle in U.S. export controls because many of the world’s top biosecurity experts are foreign nationals from countries such as Canada, the U.K., and EU member states. For example, a recent paper in Science that exposed and helped patch AI-enabled vulnerabilities in nucleic acid synthesis screening was co-authored by experts at institutions in the U.S., Switzerland, and the U.K. Under U.S. law, sharing certain biological information with foreign experts, even when they’re sitting in offices in San Francisco, can constitute a “deemed export” requiring government authorization. Securing governmental authorizations for such exports can take a month or more, but competitive pressures push AI labs to complete evaluations in a matter of weeks. This timing mismatch forces American AI companies into a difficult choice: delay product releases to seek export licenses while competitors forge ahead, limit testing to U.S. citizens only and sacrifice evaluation quality, or risk violating export controls and exposing themselves to significant civil and criminal penalties. Our recent white paper discusses these issues in greater technical and legal depth.

The Regulatory Framework

Two sets of regulations govern the export of sensitive biological information from the United States, each administered by different agencies with distinct approaches and timelines.

The Export Administration Regulations

The Export Administration Regulations (EAR), administered by the Commerce Department’s Bureau of Industry and Security (BIS), were originally designed to control the export of dual-use technologies—items with both civilian and military applications. The EAR casts a broad net, catching technologies that could be misused but aren’t inherently military in nature—for example, advanced AI chips, specialized radios, and fracking equipment. For AI biosecurity evaluations, the EAR becomes relevant when AI testing involves information about pathogens, toxins, or the technical knowledge needed to produce them.

For example, the EAR regulates the export of proprietary information about how to produce pathogens such as Nipah virus or the 1918 strain of pandemic influenza, as well as toxins such as ricin. The EAR refers to this sort of information as “technology,” which includes information necessary for the development, production, or use of controlled items such as dangerous pathogens or toxins.

These rules could require firms to secure a license before transmitting information about pathogen modification techniques, toxin synthesis pathways, or detailed methodologies for weaponizing biological agents to non-U.S. persons. When an AI model generates this type of information during testing, or when evaluators document such capabilities, sharing these results with a foreign team member may require BIS authorization.

The International Traffic in Arms Regulations

The International Traffic in Arms Regulations (ITAR), overseen by the State Department’s Directorate of Defense Trade Controls (DDTC), takes an even stricter approach. While the EAR casts a wide net for dual-use items, the ITAR focuses specifically on military technologies. The ITAR framework was designed for traditional defense articles—such as fighter jets and missile systems—and also covers biological agents specifically designed or modified for military purposes.

The ITAR controls the biological agents and related “technical data,” which is the ITAR-equivalent term for EAR “technology.” The ITAR’s definition of “technical data” is broad, encompassing any information required for the design, development, production, or operation of defense articles. Unlike the EAR, which offers various license exceptions and pathways for exports to allied nations, the ITAR requires government authorization for virtually any export of controlled technical data to foreign persons, with only narrow exemptions that generally don’t apply to biological agents.

This creates particular challenges for AI evaluations. Even informal discussions with foreign nationals about how a model could help weaponize anthrax or produce militarized biological agents could constitute an ITAR-controlled export requiring government authorization.

“Published” and “Public Domain” Information

To be clear, these controls don’t cover all biological information. Both the EAR and the ITAR carve out certain information available to the public. The EAR, for example, excludes “published” information and “fundamental research,” while the ITAR similarly excludes “public domain” information and general scientific principles. However, both regulations require that such information meet specific criteria to qualify for these exclusions, including being generally available to the public. That means a company’s proprietary information and confidential test results generally wouldn’t qualify. Moreover, even when AI models are trained entirely on public datasets, they can generate novel combinations and insights beyond any individual source that may constitute new controlled information—precisely the capability that makes them potentially dangerous and necessitates rigorous testing.

The “Deemed Export” Rule

Both regulatory frameworks include a “deemed export” rule that may seem counterintuitive. Under this rule, releasing controlled information to a foreign national within the United States is treated as an export to that person’s country of nationality. An American citizen, chatting with a Canadian in a Manhattan office, sharing AI prompts that include controlled biological information? That’s treated as an export to Canada. A U.S. red-teamer in Washington, D.C., sending evaluation results to a colleague in Palo Alto, California, who happens to be a U.K. citizen? That’s treated as an export to the U.K.

How AI Biosecurity Evaluations Work

To understand why export controls create challenges for biosecurity evaluations, it’s essential to understand what such evaluations entail. Evaluation teams—typically composed of biosecurity experts and adversarial-evaluation specialists (red-teamers)—systematically probe AI models to assess whether the models could meaningfully assist in each stage of the biological weapons development pathway.

For example, as described by Anthropic, evaluators will build multistep agentic challenges prompting the model to design pathogen sequences and draft matching lab protocols to complete realistic acquisition workflows. Evaluators will also craft short-answer questions to test models’ conceptual knowledge about biological weapons; prompt models to engineer harmless organisms as a proxy for novel biothreat creativity; and engage models in extended conversations about bioweapon ideation and design to assess whether they could make attackers more skillful. Thoroughness is critical, as incomplete evaluations could result in deploying models with unidentified dangerous capabilities. Evaluators document not just whether the model provides dangerous information but also how easily safeguards can be circumvented through prompt engineering or “jailbreaking” techniques.

Major AI labs don’t rely solely on internal teams of evaluators. They regularly engage third-party evaluators such as Deloitte Consulting and Nemesys Insights, as well as government partners like the U.K. AI Safety Institute. This small ecosystem of evaluators frequently includes foreign nationals who bring critical expertise—and this is precisely where export control complications arise.

Intersection of Biosecurity Evaluations and Export Controls

The intersection of biosecurity evaluations and export control frameworks creates significant regulatory complexity. Testing whether an AI model could assist bioweapons development produces artifacts that may fall under both the EAR and ITAR frameworks—and sharing these artifacts with foreign nationals involved in the evaluation process can trigger export controls at multiple points. Whether a Canadian researcher at an AI lab designs prompts to elicit synthesis protocols for ricin (potentially EAR controlled), a U.S. third-party evaluator shares testing protocols with a British AI lab employee that reveal methods to weaponize anthrax spores (potentially ITAR controlled), or an EU academic at a U.S. university analyzes evaluation reports documenting dangerous capabilities (potentially controlled under either regime), each interaction may constitute an export. A single evaluation cycle for an AI model might therefore require multiple, time-consuming export authorizations for different foreign nationals involved, from different agencies, each with different criteria and processing times for authorizations. If evaluations reveal the need to improve the safeguards on a model, the process of developing those mitigations may also involve transfers of information that could implicate export controls. For an industry where product cycles are measured in months, not years, and where international expertise is important for comprehensive evaluations, this regulatory complexity poses serious challenges.

Compliance Pathways Within Existing Frameworks

Despite these challenges, AI labs and their third-party evaluators have some pathways toward compliance within current regulations, though each has limitations.

EAR Compliance Options

For EAR-controlled information, AI labs and their third-party evaluators have more flexibility than under the ITAR. The regulations provide “no-license required” designations for exports of certain biological technologies to Australia Group member countries—a 43-nation consortium including most Western allies committed to preventing chemical and biological proliferation. This covers many scenarios and may be sufficient for labs whose foreign team members come primarily from these nations.

When broader access is needed—such as for working with a specialist from a non-Australia Group nation—labs might qualify for specific license exceptions. For instance, License Exception GOV permits exports of controlled technology to a foreign ministry, agency, or international organization that is working with the United States or an allied government on counterproliferation or biodefense programs, for that government’s official use rather than commercial use. However, these exceptions apply only when specific regulatory conditions are met, and most scenarios where the “no-license required” designations don’t apply—particularly as to non-Australia Group nationals—will require authorization from BIS. While initial licensing can take a month or more, BIS can issue multiyear authorizations that permit repeated transfers to specified foreign parties without requiring separate licenses for each disclosure.

ITAR Compliance Options

Sharing ITAR-controlled technical data is even more challenging. Unlike the EAR’s various pathways for license-free transactions, the ITAR requires DDTC authorization for virtually all exports of controlled technical data to foreign persons. While some exemptions exist for certain allied nations (Canada, the U.K., and Australia), controlled information about biological agents is excluded from these carve-outs. This leaves AI labs and their third-party evaluators with the stark choice of either restricting access to technical data exclusively to U.S. persons—which may limit the available expertise pool—or otherwise pursuing authorization from DDTC. While initial authorization can take a month or more, DDTC, like BIS, can issue multiyear authorizations for ongoing transfers of specified technical data to named foreign parties without requiring separate licenses for each disclosure.

Additional Regulatory Complications

Beyond the export challenges, the EAR adds yet another layer of complexity through its controls on U.S. persons’ activities anywhere in the world. Under these provisions, U.S. persons are prohibited from supporting certain activities without authorization, regardless of location. This prohibition includes any service that a U.S. person knows “may assist or benefit” the design, development, or production of biological weapons—whether in the United States or anywhere else in the world.

This creates uncertainty for AI researchers. Say a U.S. citizen on an AI safety team successfully prompted a frontier model to describe steps toward genetically modifying pathogens for weapons development. Might that violate the U.S. person prohibitions in the EAR? The answer should be no—creating or testing AI systems for evaluation purposes is fundamentally different from actually developing biological weapons, and good-faith efforts to prevent AI misuse should not be considered activities that “assist or benefit” bioweapons production. But without explicit guidance from BIS, this gray area could chill legitimate biosecurity work by U.S. researchers. Researchers might worry that by attempting to elicit an AI model to reveal its biological-weapons abilities, they are violating U.S. law—and risking jail time—by “assist[ing]” in the “design” of biological weapons.

The Stakes for National Security

This regulatory friction comes at a critical moment. Unlike traditional dual-use technologies that might take years to proliferate through physical supply chains, AI models can reach global users instantly upon release. A major model released without adequate biosecurity evaluation creates profound risks—such as lowering barriers to non-state actors producing or deploying biological weapons. Yet current export control frameworks may inadvertently incentivize AI labs to accept some of these risks.

When compliance becomes too complex or time-consuming, companies face enormous pressure to find workarounds—whether by limiting evaluations to U.S. persons only (reducing effectiveness) or, worse, by reducing the scope of biosecurity testing altogether. These challenges are compounded by competitive dynamics within the AI industry. When American companies face regulatory delays while competitors from jurisdictions without such legal hurdles can move faster, the pressure to accelerate or abbreviate evaluations intensifies.

In an Oct. 27 submission to the White House, OpenAI cited export controls as among the “regulatory roadblocks to safety testing,” requesting that the government provide general authorizations for AI companies to conduct testing for chemical, biological, radiological, and nuclear risks without needing to apply for individual licenses. This public acknowledgment from a major developer underscores that the export control obstacles documented in this article appear to pose genuine operational challenges to AI safety work.

The result is a troubling irony. The regulations intended to prevent bioweapons proliferation may actually be increasing bioweapon risks.

Countervailing Forces

Although export control laws today make biosecurity evaluations challenging, we don’t mean to portray them as only a roadblock. They can also play a useful role in incentivizing firms to test their models’ capabilities. If an AI model released to the public were to transmit controlled biological information to foreign citizens, that could be a deemed export for which the AI developer or deployer is responsible. Taking that possibility seriously should make AI companies more careful about evaluating their models, so long as they can do so without running afoul of export control laws. Export control agencies should make it easier for AI developers to evaluate these capabilities, while also striving to sharpen firms’ interest in doing so.

Recommendations

BIS and DDTC should take urgent action to resolve the export control dilemma. Two specific steps could significantly improve the situation without compromising export control objectives.

The first step is to issue clarifying guidance. Both agencies should issue clear, practical, and narrowly tailored guidelines specifically addressing AI biosecurity evaluations. This guidance should confirm that bona fide biosecurity evaluations align with U.S. national security interests; clarify how developers and evaluators can select and use the correct licensing mechanisms under the EAR and ITAR for AI biosecurity evaluations, including longer-term licensing approaches; establish expedited review procedures with guaranteed response times for evaluation-related license requests; and explicitly confirm that conducting bona fide AI biosecurity evaluations does not require a license to comport with the EAR’s U.S. person controls.

The second step is to create targeted regulatory safe harbors: The agencies should also create narrow regulatory exceptions specifically for AI biosecurity evaluations and mitigations. An EAR license exception and a parallel ITAR exemption would allow approved organizations to conduct essential biosecurity testing with foreign specialists without case-by-case licensing delays. Such exceptions would require careful boundaries to prevent abuse. Qualifying criteria might include limitations to foreign persons from specified low-risk jurisdictions, requirements for organizational biosecurity protocols and personnel screening, mandatory retention of evaluation records for government review, and periodic audits to ensure compliance with safe harbor conditions.

Importantly, alongside biosecurity evaluations, BIS and DDTC should tackle similar export control challenges affecting evaluations for chemical, radiological, and nuclear risks—which are usually assessed at the same time as biological risks. A comprehensive approach addressing the full spectrum of AI chemical, biological, radiological, and nuclear evaluations would provide clarity across the industry and prevent the need for piecemeal regulatory fixes if new evaluation risks become as urgent as the biosecurity risk.

***

The tension between export controls and AI biosecurity evaluations exemplifies a broader challenge in applying 20th-century regulatory frameworks to 21st-century technologies. The current situation serves no one’s interests. It doesn’t enhance nonproliferation, doesn’t promote the safety of AI models, and doesn’t support American AI leadership. Through targeted guidance and carefully crafted exceptions, BIS and DDTC can demonstrate that export controls can evolve to address emerging technology challenges without sacrificing core nonproliferation objectives. The alternative—continued regulatory uncertainty that potentially deters rigorous biosecurity evaluations—risks creating exactly the dangers these regulations were designed to prevent.


Doni Bloomfield is an incoming Associate Professor at Fordham Law School and a researcher at the Johns Hopkins Center for Health Security. His research focuses on intellectual property, antitrust, and health law. Before entering academia, he clerked for Judge Timothy B. Dyk of the Federal Circuit and Judge Patricia A. Millett of the D.C. Circuit, was a postdoctoral fellow at Harvard Medical School, and worked as a biotechnology reporter at Bloomberg News.
Joe Khawam is the Legislative Director at the Law Reform Institute, where he focuses on AI policy, export controls, and national security law. He previously served as an attorney in the Office of the Legal Adviser at the U.S. Department of State, where he worked on multilateral law reform issues as well as regulatory matters such as export controls and sanctions. He represented the U.S. government as head of delegation in several international negotiations at the United Nations Commission on International Trade Law and the Hague Conference on Private International Law. He also advised policy officials on economic sanctions, export controls, and transnational litigation matters. Previously, he practiced law in WilmerHale's Litigation Department, focusing on international arbitration and government investigations. He received his J.D. from Columbia Law School.
Tim Schnabel is president of the Law Reform Institute. He previously served as executive director of the Uniform Law Commission and spent a decade as an attorney at the State Department.
