
How Nuclear Deterrence Can Inform Europe’s AI Strategy

Guy Ward-Jackson, Keegan McBride
Monday, May 11, 2026, 8:00 AM

Europe needs “AI latency,” not AI sovereignty.

Flags of the European Union in front of the EU-commission building "Berlaymont" in Brussels, Belgium (Christian Lue, https://unsplash.com/photos/blue-flag-on-pole-near-building-C241mbgtgys; Unsplash License)

Two months ago, leaders gathered in India to discuss the future of artificial intelligence (AI). One question was of particular importance: How can states retain their agency and sovereignty in the emerging AI-enabled geopolitical order?

For Europe, there is a paradox at the heart of this question. Europe must diffuse AI across its economies, public services, and governments, or else it will fall behind those that do—with major economic and security consequences. Yet becoming a competitor at the AI frontier—which is dominated mostly by the U.S. and China—is out of reach due to the staggering capital, energy, and research and development required.

Europe is therefore caught between a rock and a hard place: the economic reality of being unable to credibly compete at the AI frontier, alongside the geopolitical reality that AI is a fundamental component of future military, economic, and state capabilities. The combination of these elements renders Europe strategically vulnerable.

To thread this needle, a possible solution is the concept of “AI latency,” inspired by nuclear deterrence theory. The core point is that Europe doesn’t need to develop its own frontier AI system from scratch, but it does need the capacity to rapidly build a “good enough” version in a crisis. This requires having the industrial, institutional, and talent resources necessary to quickly create a sufficiently capable alternative if access to top-tier AI is suddenly disrupted. Practically, AI latency involves using or training open-weight models, investing in shared computing resources and datasets, ensuring access to the right talent, and establishing clear thresholds of coercion or disruption that would trigger rapid replacement. In essence, it applies the traditional military-industrial idea of “surge capacity” to AI.

Europe does not need its own AI model—but it does need the capability to potentially build one.

A Diagnosis With No Strategy

Europe knows it has an AI capability problem, but the current debate around AI sovereignty swings between extremes.

At one extreme are calls for “strategic autonomy,” with initiatives such as the EuroStack seeing growing interest and support. Similarly, in some European countries, there have also been calls to build “sovereign models” from scratch. Yet these sovereign AI initiatives are economically unfeasible. Building a genuinely sovereign European digital stack has been estimated to require $300 billion and a decade of sustained investment—and that’s in an environment already strained by defense budgets, financial pressures, and geopolitical shocks. The idea of building fully “sovereign” models from scratch is therefore untenable.

At the other extreme are serious economists such as Luis Garicano, who more realistically call for Europe to pursue an adoption-first strategy. This involves accepting that Europe will not compete at the frontier and should focus instead on diffusion and capturing economic value. On this “second-mover” view, AI sovereignty is defined more by rapid adoption than by building at the frontier. Though economically rational, a purely adoption-focused strategy leaves Europe with no fallback in the event of a crisis that disrupts its access to frontier AI systems.

These extremes—full-stack sovereignty at one end and pure adoption at the other—reflect the limitations of the European AI sovereignty debate. Europe is not wrong to worry about AI sovereignty, but it needs a better way to understand what “AI sovereignty” means and how it can feasibly be implemented.

How Europe Can Build AI Latency

The operational challenge for Europe is to find a strategy that leverages the efficiency of an adoption-first approach, while maintaining sufficient latent AI capability for reliability and resilience.

The idea of AI latency is drawn from nuclear strategy. A state with “latent” nuclear capability does not possess nuclear weapons but maintains the industrial base, talent, and infrastructure needed to obtain them relatively quickly if circumstances deteriorate. Japan is the clearest example. It built a large civilian nuclear industry, accumulated significant stocks of separated plutonium, and developed the industrial depth required for rapid weaponization—all while remaining formally non-nuclear. South Korea has also debated acquiring a similar threshold capacity.

The same principles apply to AI latency in Europe. A country or an alliance with underlying AI capacity would certainly feel the shock of a major disruption. However, it would possess the technical expertise and institutional resilience to quickly deploy open-weight models or even develop a “good enough” sub-frontier model from scratch.

Of course, articulating an AI latency strategy is easier than building the political conditions for one. Waiting for pan-European consensus risks the same fate as previous coordination efforts, so the more realistic path is a coalition of the willing. To participate, a country would need a minimum level of national open-source AI capability: a centralized national open-source lab, a strong AI company, or an equivalent capability that concentrates the open-source talent, infrastructure, and tacit knowledge needed to build and adapt models. Without this national layer, the coalition becomes a coordination structure with nothing behind it. Importantly, the national capability does not need to look the same in every country, nor does it need to be built from scratch: For Germany it might be the Sovereign Tech Agency (which excels in software maintenance), for France it might be Mistral, and for the U.K. it could be a scaled-up version of the Incubator for AI or AI Security Institute (or some combination of the two).

Once the open-source leads at the national level are identified, a successful AI latency strategy would require five elements. First is a flagship program to distill capable models. The aim here is not to chase the frontier, but to build the technical capability to distill, fine-tune, and repeatedly adapt leading open-weight models, anchoring that expertise in a permanent lab or institutional body. Singapore’s SEA-LION program—developed through AI Singapore and built on open-weight architectures such as LLaMA and Qwen—offers a replicable model: a state-backed initiative that has built genuine sub-frontier capability in continued pretraining and fine-tuning, without attempting to compete with U.S. or Chinese frontier labs.

The second element of the strategy is investment in strategic datasets and shared compute infrastructure, so that high-quality domain data and the capacity to run serious pretraining exist on sovereign soil.

The third element requires building a deep talent ecosystem across the full model pipeline, from data engineering and evaluation to deployment and safety, so that knowledge of how models are built and adapted is held domestically rather than solely by foreign vendors. The most important thing here is for Europe’s leading AI and open-source talent to cooperate with one another, building relationships that are grounded within Europe.

Fourth, the strategy would involve developing open-source capability across coalition governments. Countries involved would commit to some level of open-source procurement, and they would also need to build the technical capacity and muscle memory to routinely switch between model providers—so that replacing models within government and public services during a crisis is feasible. This also doubles as industrial strategy, allowing specialist European AI small and medium-sized enterprises (SMEs) to compete for contracts.

Finally, AI latency would require coordination on when to activate latent capabilities. This would operate much like traditional “surge capacity” for industrial production in wartime: flagship programs and continued coordination in peacetime, and then a trigger that drives ramp-up, or the building of a full-scale general-purpose model, in a crisis. Thresholds are critical here. If Europe cannot agree on what triggers a crisis—such as sudden export restrictions, regulatory divergence, or disruption to API access affecting critical sectors—then paralysis will set in when rapid substitution is needed most. A NATO Article 5 equivalent for AI latency would therefore be needed.

Crucially, investments in latent AI capacity are not idle reserves. They would strengthen Europe’s AI ecosystem in peacetime. The engineers who adapt and distill models would also drive economic growth day-to-day. Giving European AI SMEs incentives to compete in specialist sectors and applications would also increase adoption appetite—moving away from general-purpose models that corporations struggle to integrate meaningfully and toward specific use cases where the value is clear.

Put simply: AI latency delivers long-term resilience and deterrence, while accelerating European capability and adoption in the near term.

Agency, Not Sovereignty

AI latency reconciles the three pressures that otherwise pull policy in different directions. It preserves the economic logic of the adoption-first approach by avoiding futile competition at the frontier. It builds resilience against major disruption without attempting full autarky. And it respects opportunity costs by focusing on capabilities that are productive in peacetime rather than on duplicative industrial monuments.

AI latency also addresses a fundamental conceptual challenge: reframing AI sovereignty from naive hopes of total control to a realistic argument about agency and optionality. Technological interdependence is here to stay; the real question is whether Europe can manage it. The next stage of Europe’s AI strategy does not mean owning the AI frontier, but it does mean ensuring Europe can keep the lights on in a crisis.


Guy Ward-Jackson is a senior analyst of science and tech policy at the Tony Blair Institute. Guy focuses on AI regulation and governance, AI for science, and the intersection of AI with national security, economic security, and trade.
Dr. Keegan McBride is a Lecturer in AI, government, and policy at the Oxford Internet Institute, a Non-Resident Senior Fellow at the Foundation for American Innovation, and an Adjunct Senior Fellow in National Security and Technology at the Center for a New American Security. His research explores how new and emerging disruptive technologies are transforming our understanding of the state, government, and power.