Beyond Bans: Expanding the Policy Options for Tech-Security Threats
How policymakers, technical experts, and businesses should work together to develop a new toolkit to mitigate tech national security risks.

In early April, President Trump granted TikTok another 75-day reprieve from its threatened ban in the United States. It is but the latest twist in a five-year, administration-spanning saga, in which the U.S. government has repeatedly threatened to ban the Chinese-owned app from the U.S. market if it is not sold to non-Chinese buyers—but has never followed through on such ultimatums.
While the TikTok case has some unique challenges, it is part of a broader trend of using bans to address national security risks associated with Chinese technology in the United States. After Chinese company DeepSeek released an innovative new AI model, members of Congress were quick to initiate a conversation about whether to ban DeepSeek in the United States. The government has already announced measures to ban certain connected vehicles from China and is working on similar restrictions for Chinese drones; reports suggest certain Chinese routers could also be banned. Beyond China, the last administration also banned the Russian antivirus provider Kaspersky—another example of how the government is using national security authorities in the tech supply chain.
There are plenty of real national security issues posed by technology from China and other foreign adversary countries across various elements of U.S. industries and tech supply chains. Such risks range from espionage, to “prepositioning” of malware (quietly putting malicious code in place that can be activated later), to increased leverage over U.S. supply chains, including for the defense industrial base. To better address this policy problem, however, the United States urgently needs to build policy toolkits—and policy muscles—beyond bans. Policy discourse about how to mitigate national security risks from a specific technology, such as a Chinese AI model or mobile app, all too often results in reductive conversations about whether or not to ban such technology. But this dichotomy leaves policymakers with an unappealing choice: Either ban any technology that poses a risk, or—if unwilling to follow through with an action as dramatic and costly as a ban—do nothing, and leave the American public exposed to potential national security risks as a result.
American policymakers need a spectrum of responses to foreign technology risks that appropriately balance trade-offs in economic costs; Americans’ access to online services; supply chain entanglement; transparency; domestic imperatives like privacy and civil liberties; and the ability to convince allies and partners to act alongside the United States, where relevant. At present, however, such a toolkit—encompassing technical, governance, and commercial mitigation measures—falls short of a robust, comprehensive approach to contemporary tech supply chain and national security risks, leaving the U.S. vulnerable and policymakers without tailored options for acting on potential threats.
To Ban, or Not to Ban
Bans are sometimes an appropriate response to a national security risk from a piece of technology. When national security risks are high, mitigation is difficult, and a ban will have limited negative consequences for considerations such as American firms’ ability to compete globally or American citizens’ access to online and communication services, imposing a ban is a logical policy action. The U.S. government, for example, was right to ban telecom equipment from Chinese company Huawei from use within the United States. The risks were outsized (Huawei equipment was embedded throughout U.S. networks, potentially enabling espionage and disruptive cyber activities); the risks were difficult to mitigate through other means (a single software update, especially in software-driven 5G networks, could introduce core flaws or malicious code well after pre-deployment screening); and the negative consequences of a ban were fairly limited (or at least would have been, had the diplomatic messaging been better handled).
But a huge number of other Chinese technologies in the U.S. market may also pose some risk to the United States for which bans are impractical solutions. The technology that could hypothetically be covered under the Commerce Department’s information and communications technology and services (ICTS) supply chain program alone, for example, ranges from AI models, antivirus programs, and mobile apps to connected vehicles, wireless keyboards, and smart refrigerators. The Trump administration has argued this program was “underutilized” in the last administration and stated that it intends to expand the “scope and remit” of the office. Yet enforcing bans on all of these goods would overwhelm U.S. policymakers and national security staffers with limited time and resources. Moreover, unwinding all interdependence in global tech supply chains would impair American firms’ own ability to participate, innovate, and compete in technology at home and abroad. Widespread bans could also impose significant costs on U.S. consumers, sparking unsustainable inflation and shortages. It will also become increasingly difficult for the United States to convince partners and allies to align with U.S. policy toward Chinese tech if the government adopts sweeping, indiscriminate bans.
Given the immense costs of widespread bans, it is unsurprising that U.S. policymakers have often been reluctant to follow through on enforcing bans on a piece or type of technology. The problem, however, is that many of the short-of-ban mitigation solutions that have been identified to date appear inadequate or incomplete.
Under several national security regulatory programs—such as the Committee on Foreign Investment in the United States (CFIUS) (meant to address risks associated with foreign ownership of U.S. entities), Team Telecom (which reviews certain Federal Communications Commission licenses and applications for national security risks), and Commerce’s ICTS program (covering U.S.-related tech supply chains)—policymakers can impose mitigation requirements short of a ban. In practice, however, U.S. officials have often not found mitigation measures a viable alternative for addressing risks from adversaries. For instance, one of the long-standing proposals to mitigate security risks associated with TikTok while maintaining Chinese ownership is to have a trusted U.S. partner review ByteDance’s source code to look for malicious code. Yet experts agree it would be next to impossible for outsiders to identify code maliciously hidden within millions of lines of code, particularly given the rate at which software applications are updated.
Building Better Approaches
Stuck between a rock (the fact that banning all Chinese tech that poses a risk is expensive and impractical) and a hard place (the fact that many existing mitigation proposals are inadequate), what are policymakers to do? One answer is to prioritize banning the technologies that pose the greatest risks. For instance, the Commerce Department’s ICTS office has published a technology prioritization list that provides guidance on the technologies for which the office intends to focus its limited resources. But more can and should be done to develop clear principles to identify which tech needs to be banned, and to integrate such principles across the various government offices and authorities charged with carrying out this work. Bans may also be narrowly tailored (what some dub partial bans) to certain subparts of a system, certain portions of a market, or certain geographic areas. Pursuing a partial ban on a technology for national security reasons could constrain the breadth of the ban’s impact.
But this is an incomplete and unsatisfactory approach. As noted above, there is a broad range of technologies that pose a risk large enough to merit some response, but not so large a risk that they must be banned—or for which a ban may shift the risk space (such as by compelling the adversary to move to other means of siphoning data or infiltrating a supply chain) in ways that present further or greater security complications. Prioritizing which technologies to ban and modifying the details of those bans rely on the same tool for the same purpose, without developing and using other tools.
There is thus an urgent need for policymakers, technical experts, and private-sector leaders to work together to develop a menu of new policy ideas for mitigating national security risks associated with technologies from foreign adversaries.
What might such a menu include? To begin with, there are a number of technical approaches, particularly with respect to data processing and governance, that might mitigate some of the risks associated with Chinese technology. For example, advances in privacy-enhancing technologies (PETs) and techniques, such as federated learning, could (when implemented well) enable analysis of sensitive data on a device and even AI model training without needing to send the underlying data to a central server. This could be used to help address particular scenarios associated with a Chinese firm’s potential access to U.S. persons’ data. Similarly, if the concern is that an application’s owner could intercept data in transit, then the type and strength of the app’s encryption, how the data is processed, and how the encryption keys are managed could also mitigate particular risk scenarios.
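To make the federated learning idea concrete, the sketch below shows federated averaging in its simplest form: each “device” fits a small model on data it never shares and sends back only model weights, which a coordinating server averages. This is a minimal illustration in Python with NumPy, using a toy linear model of our own invention rather than any deployed system; production implementations layer on secure aggregation, differential privacy, and other protections.

```python
# Minimal federated-averaging sketch (illustrative only): each "device" trains
# locally and shares model weights, never the raw records themselves.
import numpy as np

def local_update(weights, features, labels, lr=0.1, epochs=5):
    """One device's training pass; raw features and labels stay on the device."""
    w = weights.copy()
    for _ in range(epochs):
        preds = features @ w
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_round(global_weights, device_datasets):
    """The coordinating server averages per-device weights; it never sees raw data."""
    updates = [local_update(global_weights, X, y) for X, y in device_datasets]
    return np.mean(updates, axis=0)

# Toy setup: three "devices" hold private samples of the same underlying relationship.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

weights = np.zeros(2)
for _ in range(20):
    weights = federated_round(weights, devices)

print(weights)  # converges toward [2.0, -1.0] without pooling any raw data
```

Even in this simple form, the shared model updates can themselves leak information about the underlying data, which is one reason federated learning addresses some risk scenarios and not others, as discussed below.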
There is also a range of corporate governance-related measures—including efforts to insulate companies from Chinese government influence and to provide the U.S. government and U.S. companies with greater oversight of, and influence over, companies with Chinese ties—that could alleviate some national security risks. The government could also consider imposing various commercial restrictions, such as “know your customer” (KYC) requirements, in order to mitigate national security risks, such as a company without proper controls providing services to a foreign adversary-linked university or sending products to an unscrupulous overseas reseller. Policymakers could require companies to subject themselves to rigorous, independent audits by third parties, provide network diagrams and other technical schema to policymakers (like the kind that Team Telecom compels in its mitigation agreements for telecom security), and even train their employees on access controls and insider threats with a geopolitical lens.
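For a sense of what a KYC control can look like in operational terms, the hypothetical sketch below screens a counterparty’s name against a restricted-party list before an order is fulfilled. The list entries, threshold, and matching logic are illustrative assumptions, not any specific regulatory standard; real screening tools rely on official government lists and far richer entity-resolution logic.

```python
# Hypothetical restricted-party screening step in an order pipeline (illustrative only).
from difflib import SequenceMatcher

# Placeholder entries; real programs would draw on official export control and sanctions lists.
RESTRICTED_PARTIES = [
    "Example Defense University",
    "Example Trading Co (Reseller)",
]

def similarity(a: str, b: str) -> float:
    """Rough name-matching score in [0, 1]; real tools use richer entity resolution."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen_counterparty(name: str, threshold: float = 0.85) -> bool:
    """Return True if the counterparty resembles a restricted party and needs human review."""
    return any(similarity(name, entry) >= threshold for entry in RESTRICTED_PARTIES)

# Usage: a minor spelling variation should still trigger a compliance hold.
if screen_counterparty("Example Defence University"):
    print("Hold order: possible restricted-party match; escalate to compliance review.")
else:
    print("No match; proceed with standard checks.")
```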
Lawfare’s past Trusted Hardware and Software Working Group identified many other trust metrics and risk mitigation measures across technology, corporate governance, and law. These included the use of open, nonproprietary technical standards and implementation that can be examined publicly; organizational measures that are transparent and auditable; and the use of provable, analytic means of trust verification (relying on a tech system’s performance criteria and other metrics) over axiomatic, nonverifiable means (assuming certain things about a tech system’s behavior and basing trust on those assumptions). But more research is needed on all of these measures to fully assess their strengths, weaknesses, and most relevant applications for mitigating risks at tolerable costs. In other words, if the risk can’t be eliminated (and many risks, like hacking, cannot), how much risk is acceptable to policymakers?
This leads to an important point: Policymakers must be clear-eyed about the fact that none of these measures are perfect or work for every risk model. Take the federated learning example. If the national security risk scenario in question is the Chinese government accessing individuals’ data by demanding it from a Chinese-owned app company, perhaps federated learning would be a satisfactory technical mitigation. (Like all technologies and data-related techniques, it is also susceptible to outright circumvention and privacy attacks.) However, if the risk scenario in question is not Beijing demanding that a Chinese firm hand over individual data points, but concern about Americans’ data being used to train a sophisticated English-language AI model in China, techniques that still enable AI training on the data (whether the data leaves the device or not) are inadequate to mitigate the risk in question. Likewise, KYC rules might mitigate the risk that a U.S. company sells security-related tech to a Chinese military entity—but would fail to mitigate risks related to insider threats, poor encryption, or the creation of threats through sales to a “legitimate” corporate entity in China.
As these mitigation measures are developed, policymakers should keep certain general principles in mind. To be effective, mitigation measures should be designed for a zero-trust environment. That is, mitigation measures should not merely be paper promises from a company that a determined adversary could easily circumvent; the purpose is to develop tools that allow even nontrusted tech entities to operate in the U.S. market without posing meaningful risks to U.S. persons or U.S. national security. Additionally, mitigation frameworks should avoid requiring costly ongoing oversight from an already-taxed government bureaucracy; the objective should be to identify solutions that make such oversight unnecessary. (Notably, the Trump administration appears to concur, as the “America First Investment Policy” memo recently released by the White House is skeptical of “overly bureaucratic, complex and open-ended” mitigation agreements in the CFIUS context.)
The government’s mitigation toolkit should also have clear standards and be deployed in an objective, depoliticized process, which will help engender trust among the U.S. public that such measures are in fact effective tools for mitigating national security risks. This means some degree of transparency for involved companies, congressional and executive branch stakeholders, and the public; minimizing political opining about specific, ongoing matters; a rigorous methodology within the program to qualitatively and quantitatively evaluate the landscape of risks; and opportunities for appeal and complaint. And the toolkit should be designed, as much as possible, to be interoperable with policymaking processes in partner and allied countries, allowing these governments to more easily adopt similar measures and promote integration across partner and allied markets.
Moving Toward Mitigation, One Way or Another
Ultimately, as the U.S. government continues to expand its focus on technology security risks, it will likely turn toward greater use of mitigation instruments out of necessity. For instance, the current connected vehicle rule would in principle ban Volvo dealerships in the United States, since the company is owned by Chinese parent company Geely. In practice, it appears more likely that Volvo will seek a “Specific Authorization” to continue operating in the U.S. market, which will require mitigation measures. Similarly, as the Commerce Department moves forward with a new rule on national security risks associated with Chinese drones, mitigation may feature heavily in the final policy: At present, China-based companies make up some 75 percent of the U.S. consumer drone market, rendering a full ban impractical. And depending on the structure of any final TikTok resolution, it too may require mitigation measures to address ongoing risks associated with Chinese control of the app’s key algorithms.
Now is the time for the broader tech and national security stakeholder community to work urgently to develop creative, pragmatic approaches for mitigating national security risks associated with technologies from foreign countries of concern. None of these measures will be a silver bullet, especially when the foreign government in question is a sophisticated and well-resourced adversary. But by developing a menu of modular, tailorable policy responses, policymakers will be better positioned to match specific national security risks with appropriate mitigation measures.