
The Sovereignty Gap in U.S. AI Statecraft

Pablo Chavez
Monday, February 16, 2026, 5:00 AM

Washington is offering partners AI sovereignty on its terms, even as many countries work to reduce dependence on the United States. 

Made in America Product Showcase, July 17, 2017 (Official White House Photo by Evan Walker; Public Domain)

As the India AI Impact Summit kicks off this week, the Trump administration has embraced the language of “sovereign AI.” Through the White House Office of Science and Technology Policy and the emerging American AI Exports Program, the administration is seeking to position the United States as a partner that can help countries build sovereign artificial intelligence capabilities using American technology.

But there is an irony to this: The concept of AI sovereignty is one that many countries are developing specifically to reduce their reliance on the United States. The traction that sovereign AI is gaining around the world reflects, in significant part, unease about U.S. policy. Many countries developing AI systems are hedging against the possibility that Washington will change the rules, restrict access, or use technology dependence as leverage. That hedging is pushing partners toward notions of sovereignty that may be incompatible with what the administration is prepared to offer. That offer might look like a reasonable middle ground in a more stable policy environment, but it’s less attractive in a period marked by tariff disputes with allies and partners, questions about multilateral commitments, and rising tensions within alliances.

Whether partners will accept Washington’s version of AI sovereignty is a central question for U.S. AI statecraft. The answer will shape whether U.S. sovereign AI efforts reinforce reliance on the United States or accelerate other countries’ hedging strategies. 

Sovereign AI’s Trajectory

“Sovereign AI” lacks a single agreed-upon definition, but at its core the term reflects national governments’ desire to place the development, deployment, and control of AI models, infrastructure, and data in domestic hands. Countries like India are adopting their own variations of the concept. For example, last year India’s Ministry of Electronics and Information Technology (MeitY) stated: “Sovereign AI refers to a nation’s ability to independently develop and manage AI technologies to maintain control over its data, ensure privacy, and address specific local needs.”

MeitY’s definition captures only part of what drives sovereign AI. Across the national initiatives I’ve tracked, governments are usually responding to six recurring pressures: first, keeping sensitive data under domestic jurisdiction and limiting exposure to foreign legal compulsion; second, sustaining continuity and resilience of critical AI services amid disruptions; third, ensuring security, compliance, and domestic oversight under national law; fourth, promoting economic development and domestic capability-building; fifth, reducing single-provider dependence and vendor lock-in; and sixth, preserving national languages and cultural context.

In practice, this has translated into two main strategies: building national AI models and acquiring and managing access to domestic computing infrastructure. I surveyed these efforts in a November 2024 Lawfare article. What has changed since then is the trend’s scale and explicitness. In 2024, I tracked roughly 40 government-backed sovereign AI projects across approximately 30 countries. By January 2026, the number of projects had more than tripled to nearly 130 across more than 50 countries, data that will be published later this year as part of a Center for a New American Security sovereign AI index. Countries are increasingly framing these efforts—which are projected to continue growing—in overtly sovereignty-based terms, and as alternatives to dependence on foreign technology. 

India perhaps best illustrates the shift. The IndiaAI Mission, led by MeitY, is an integrated national program that weaves together projects across India’s AI stack and encompasses both private- and public-sector actors. It includes a curated data layer with access controls and a sandbox environment (AIKosha), a government-mediated system for allocating subsidized compute across multiple providers (Common Compute Capacity), and funding for Indian-language large language model programs such as BharatGen and Sarvam AI (the IndiaAI Innovation Centre). The result is growing national capacity to shape who gets compute and other AI resources, on what terms, and for what purposes.

India still relies on foreign technology, and U.S.-origin chips and tooling remain central across both public- and private-sector deployments of its AI stack. But by consolidating domestic data assets, allocation authority, and model development under government coordination, New Delhi is building alternatives across the layers of the AI stack that it can control—a hybrid sovereignty posture, with both layered local oversight and continued reliance on parts of the American stack. Indian officials have been clear that these programs are designed to reduce dependence on foreign AI infrastructure. 

The Trump Administration’s Response

The Trump administration has elevated “sovereign AI” as part of its international AI posture, with the White House Office of Science and Technology Policy (OSTP) serving as its most explicit public proponent of the concept.

In recent congressional testimony, OSTP Director Michael Kratsios framed the administration’s goal as enabling U.S. companies to provide “modular AI stack packages” that empower countries to develop “sovereign AI capabilities with American technology.” At the APEC Digital and AI Ministerial in August 2025, OSTP grouped “AI sovereignty” with “data privacy” and “technical customization” as outcomes partners should expect from U.S. AI export packages. And in September 2025 remarks at the U.N. Security Council, OSTP described the administration’s aim of enabling partners to build “sovereign AI ecosystems” using “secure American technology,” tying sovereignty to trusted technology choices and shared security interests.

Taken together, this is a deployment-layer conception of sovereignty: operational control at the point of deployment, rather than independence from upstream technology suppliers. For some countries, that may be entirely sufficient. Partners decide how AI systems are configured and what rules govern their use. But key dependencies underneath that deployment layer—including advanced chips, frontier AI models, and cloud infrastructure—remain U.S.-originated or U.S.-controlled, even when facilities are built and operated locally by domestic providers.

The same OSTP statements are notable for what the administration has not publicly committed to. Among other gaps, its “sovereign AI” framing stops well short of the full stack—it does not extend to supporting partner-country chip development, for instance. But even within the data, model, and compute layers, significant gaps remain. OSTP has not suggested that partners will gain the capacity to train frontier-scale models on their own soil (though in May 2025, the administration signaled support for such training in the Gulf states, and emerging open-weight alternatives potentially reduce the impact of any U.S. restrictions on frontier models). The framing is also silent on whether sovereign AI ecosystems built with American technology can incorporate non-U.S. models, including Chinese ones, even though for many partners that optionality is central to what sovereignty means.

Nor does the framing address the relationship-level concerns that drive much of the sovereign AI impulse. OSTP has not offered any guarantee of uninterrupted access to the most advanced U.S. AI resources independent of export licensing and policy discretion. It does not frame sovereignty as encompassing portability or exit rights that would allow partners to continue operating if the vendor relationship or U.S. policy changes—concerns European officials have described in terms of a technology “kill switch.” And the framing says nothing about whether these packages include data residency guarantees—a baseline expectation for many partners when they hear “AI sovereignty.”

That gap matters because many governments—as the Indian example demonstrates—seek AI sovereignty in part as a hedge against technology dependence and other countries’ policy discontinuity. A deployment-centered sovereignty offer answers only part of that demand.

Some distance between what the United States offers and what partners want is probably inherent to the U.S. position in the AI ecosystem. Any U.S. administration would face a version of this tension. But the magnitude of the gap is not fixed. It widens or narrows depending on whether partners trust Washington to maintain consistent rules, honor commitments, and refrain from using technology dependence as leverage.

Countries—allies, partners, and others—are making these calculations against the backdrop of changing trade relationships, uncertain multilateral commitments, and challenges to allied sovereignty. Reasonable people might disagree about whether these policy trends serve American interests. What is harder to dispute is that these trends are accelerants that intensify concerns about dependence and push more governments to diversify away from the American AI stack.

Sovereign AI and the American AI Exports Program 

The American AI Exports Program (AAEP) is the clearest vehicle through which the administration’s sovereignty offer is expected to take concrete form, at least for a subset of priority deals. It’s designed as a selective export-promotion channel; thus, most U.S. AI exports will continue to flow through ordinary commercial channels. But the program is significant because if implemented, it will concentrate U.S. government attention on a select set of markets (some of them small but strategically consequential), crystallize the transactions that the government is prepared to support through diplomacy and financial backing, and operationalize the administration’s definition of “sovereign AI,” which will likely have ripple effects beyond the AAEP.

The AAEP can plausibly offer deployment control as a bargain: faster deals, government-backed financing, and diplomatic support in exchange for accepting U.S. conditions. The risk is that the OSTP sovereignty label moves beyond the program. Once it applies more generally, the bargain disappears but the constraints remain, reinforcing the distrust—and subsequent hedging—the policy is meant to counter.

Here’s how it’s supposed to work. The Department of Commerce is building the program around designated industry consortia and targeted markets. The consortia assemble exportable AI packages that combine hardware, cloud infrastructure, models, and applications (thus the modularity language proposed by OSTP). Approved packages would then be eligible for coordinated U.S. government backing—including commercial diplomacy and, potentially, priority access to export credit, loan guarantees, and development finance assistance. 

Most recently, Commerce issued a request for information (RFI) to gather industry input before opening the call for consortium proposals. The responses underscore that, despite OSTP’s vision for the program, the meaning of “AI sovereignty” under the AAEP is still taking shape.

Several companies flagged six recurring concerns—likely reflecting international customer feedback and echoing many of the drivers of sovereign AI noted earlier—that they believe the AAEP should address: control over data, control over compute, control over models, control over deployment options, assurances on legal exposure and government access, and continuity risk (what happens if the U.S. government disrupts or withdraws access to U.S.-provided AI services). The legal exposure and continuity risk concerns, in particular, may exceed what the administration’s deployment-layer conception of sovereignty is prepared to offer.

The Future of U.S. Sovereign AI Exports

Several variables will shape how the AAEP’s sovereignty offer develops and whether partners experience it as meaningful autonomy. 

Deployment Sovereignty and Supply Chain Resilience

One is the unresolved relationship between deployment sovereignty and supply chain resilience. The AAEP is built around exportable deployment packages. Resilience—ensuring, for example, continued access to advanced compute in the face of disruptions—is a different problem from deployment sovereignty, and one the administration’s “sovereign AI” framing generally does not address.

The closest adjacent vehicle is Pax Silica, a U.S.-led declaration among countries with complementary roles across the AI supply chain. Pax Silica doesn’t use the language of sovereignty. Instead, it aims to reduce “excessive dependencies” (on China in particular, though the document does not say so explicitly), and its signatories will “endeavor to provide access to trusted partners to the full stack of technological advancements that are shaping the AI economy.” In practical terms, the declaration is a coordination pledge on investment-security practices, infrastructure and incentives, and enforcement cooperation across multiple layers of the AI supply chain, including frontier models, semiconductors, advanced manufacturing, logistics, minerals processing, and energy.

That framing—which is still evolving—is upstream of the AAEP. The exports program is built around exportable deployment packages. Pax Silica is about co-production, inputs, and resilience. These are not inherently contradictory, but the AAEP, as currently configured, concentrates control by positioning the U.S. as the AI diffusion hub, with partner countries as endpoints for American stacks. By contrast, Pax Silica distributes control by recruiting co-producers across the supply chain.

If partner “access” is operationalized mainly through U.S.-approved consortia under the AAEP, continuity risk and gatekeeping power remain concentrated in Washington. If instead it is operationalized through shared production and diversified inputs, partners gain leverage through interdependence because keeping the network running becomes a collective problem rather than a discretionary American decision. For many countries, the practical alternative to full supply chain independence is to become co-producers within a network that reduces single-point dependencies and enhances resilience under political stress. Not every partner can play that role, though. For some, the best achievable outcome is a more resilient deployment posture and clearer terms on what happens when U.S. policy shifts.

Scope of the AAEP 

A second variable that will determine how the AAEP’s sovereignty offer develops and whether partners experience it as meaningful autonomy is scope. The AAEP could remain a targeted program. It could focus, for example, on Global South countries where governments want AI infrastructure but face financial constraints and a narrow menu of options, and where Chinese companies are increasingly offering turnkey, affordable solutions that U.S. firms might otherwise not prioritize because of potentially low return on investment and high operational challenges.

But if the U.S. decides to expand the AAEP to already-active AI markets, then the program’s impact may be more limited and perhaps counterproductive. Take India as an example, where AI diffusion is already underway through ordinary commercial channels. In addition to government-led efforts, domestic operators are offering AI cloud services. U.S. hyperscalers already run multiple in-country regions with managed AI services and are expanding onshore AI infrastructure at scale, often via local partnerships.

In that environment, AAEP-style packaging and U.S. government facilitation add little if any marginal value to transactions that would occur anyway. In fact, a program designed to accelerate diffusion could become an additional procedural layer that slows deal cycles, politicizes routine commercial activity, and signals that “approved” or at least favored AI deployment requires a U.S. government channel even in markets that are already moving. 

Washington’s Calculus 

A third variable is how much discretion Washington is willing to trade for partner confidence. For many partners, “sovereignty” turns less on configuration control at deployment and more on whether access can be disrupted after the deal is signed. The AAEP can’t solve that problem through product packaging alone. The deciding question is whether AAEP participation comes with an assurance layer that reduces uncertainty through tools such as clearer non-disruption and suspension terms and practical exit and portability rights if the vendor relationship or U.S. policy shifts. These steps would not eliminate U.S. leverage, but they would narrow the sovereignty gap for the subset of partners whose core demand is continuity under political stress. 

Jurisdictional exposure is the second part of that assurance layer, and it can’t easily be engineered away. Under the CLOUD Act, for example, a provider subject to U.S. jurisdiction can be compelled to disclose communications and records within its “possession, custody, or control,” regardless of where the data is stored. Client-side encryption and customer-held keys limit what a provider can produce, but they don’t eliminate the underlying legal lever. For partners that define sovereignty as insulation from U.S. compulsion, the strongest mitigation is keeping the most sensitive workloads under a domestically controlled operator. Short of that, many will treat U.S.-provider jurisdictional exposure as a structural constraint on “sovereign” deployments, unless Washington offers clearer process commitments and predictable guardrails around disruption and compulsion risk. 
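
To make that mitigation concrete, the following is a minimal Python sketch of client-side encryption with a customer-held key, using the open-source cryptography package. The provider class and its methods are hypothetical, invented purely for illustration; no real vendor’s API is implied. The point is that a compelled provider can produce only opaque ciphertext when the customer alone holds the key, though the compulsion lever itself, and access to metadata, remain.

# A minimal sketch of client-side encryption with a customer-held key,
# using the "cryptography" package (pip install cryptography). The
# provider here is hypothetical, for illustration only.
from cryptography.fernet import Fernet


class HypotheticalProvider:
    """Illustrative cloud provider that stores whatever bytes it receives."""

    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}

    def upload(self, object_id: str, blob: bytes) -> None:
        self._store[object_id] = blob

    def comply_with_disclosure_order(self, object_id: str) -> bytes:
        # Under legal compulsion, the provider can hand over only what is in
        # its "possession, custody, or control": here, opaque ciphertext.
        return self._store[object_id]


# The customer generates and keeps the key; it never reaches the provider.
customer_key = Fernet.generate_key()
cipher = Fernet(customer_key)

provider = HypotheticalProvider()
provider.upload("doc-1", cipher.encrypt(b"sensitive government records"))

# A disclosure order yields ciphertext the provider cannot decrypt ...
disclosed = provider.comply_with_disclosure_order("doc-1")

# ... while the customer, holding the key, can still recover the plaintext.
assert cipher.decrypt(disclosed) == b"sensitive government records"

# Structural limit: the provider can still be compelled to produce the
# ciphertext and associated metadata, and the legal lever itself persists.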

Partner Drift

A fourth variable is partner drift, which is an external constraint the AAEP can’t necessarily control, but one it will have to contend with. The AAEP’s sovereignty pitch is arriving as allies are embedding sovereignty requirements in procurement rules in ways that may be incompatible with OSTP’s deployment-centered conception. The European Commission’s Cloud Sovereignty Framework, for example, includes a “Data & AI sovereignty” objective that explicitly ties sovereignty to minimizing dependency on non-EU technology stacks.

In that environment, an American-stack sovereignty offer will be scored against systems that treat foreign jurisdictional exposure as a structural deficit. But not all of these requirements reflect genuine security or resilience concerns. Some function as trade barriers that discriminate against foreign providers under the cover of sovereignty. The challenge for U.S. statecraft is distinguishing between the two and addressing the legitimate demands credibly enough to contest the rest.

Sovereignty, Continuity, and Trust 

The New Delhi summit will produce announcements and perhaps new agreements. But even if the AAEP succeeds as a targeted diffusion tool, it is unlikely to close the sovereignty gap for governments that define sovereignty as insulation from U.S. government discretion and reach. It may meet some countries where they are today by delivering deployment-layer control on attractive terms. But unless it is paired with credible continuity and jurisdictional assurances, along with a complementary resilience strategy, many will treat an American-stack package as a useful floor while continuing to build options that reduce U.S. dependence. How quickly that happens will depend not only on what countries build but also on whether Washington provides them with good reasons to trust the United States as a credible, predictable, and reliable partner.


Pablo Chavez is an Adjunct Senior Fellow with the Center for a New American Security's Technology and National Security Program, a Non-Resident Senior Fellow at Georgetown's Center for Security and Emerging Technology (CSET), and a technology policy expert. He has held public policy leadership positions at Google, LinkedIn, and Microsoft and has served as a senior staffer in the U.S. Senate.
