Cybersecurity & Tech | Foreign Relations & International Law

Ukraine’s AI Gambit Shows Middle Powers How to Play a Weak Hand

Jake Steckler, Sam Winter-Levy
Wednesday, April 29, 2026, 1:00 PM

Kyiv is turning battlefield data into strategic leverage. Other countries should take note.

Ukrainian flag in Kyiv. (Rawpixel, https://www.rawpixel.com/image/6918379; Public Domain).

On March 12, Mykhailo Fedorov, Ukraine’s minister of defense, announced that the Ukrainian military would make available millions of drone videos and other battlefield data to Ukrainian companies and allied nations to help train artificial intelligence (AI) models. “We must outperform Russia in every technological cycle,” Fedorov said, and “artificial intelligence is one of the key arenas of this competition.” According to the defense minister, partners would be able to train AI models on the data but would not be allowed to take possession of the videos themselves. The arrangement is also intended to accelerate Ukraine’s own development of autonomous drones and deliver new capabilities to the front line.

The announcement was easy to miss amid the daily churn of war news. But it deserves attention far beyond the battlefield. Ukraine is doing something that most of the world’s middle powers have so far failed to do: identifying a concrete source of leverage in a global AI race between the great powers that threatens to leave them behind.

*          *          *

The AI race is rapidly becoming a two-player game. The United States and China dominate the talent pool, the computing infrastructure, and the investment flows that drive frontier AI development. No other player comes close: The EU’s most ambitious response, for example, a planned network of “AI Gigafactories,” will be an order of magnitude smaller than rival projects abroad, scattered across member states, and years from coming online. A single American company, Google, now controls roughly a quarter of the world’s computing power, and U.S. AI firms attracted 75 percent of all AI investment last year. Countries such as France, India, Japan, and the United Kingdom have real economic heft and serious technical communities, but none of them can independently marshal the resources required to train and run the most advanced AI systems. That leaves them dependent on foreign providers for capabilities that will soon underpin critical sectors of their economies and their national security. And it leaves them vulnerable to the full spectrum of AI’s disruptive effects, from labor displacement to AI-enhanced cyberattacks, regardless of whether they reap any of AI’s benefits.

This dependency is qualitatively different from a reliance on imported oil or foreign-made weapons, which can be stored, diversified, or substituted over time. Access to a frontier AI model runs through infrastructure that a foreign provider can switch off at will. Today, that vulnerability is mostly theoretical. As the technology becomes more deeply embedded in critical infrastructure, however, the leverage it confers will grow. And neither Washington nor Beijing has shown much restraint when it comes to exploiting other countries’ technological dependencies for political gain. The Trump administration has already linked access to American AI technology to unrelated trade demands; China has repeatedly weaponized export restrictions on critical minerals. Open-source models are no panacea: The best of them usually lag a generation behind the frontier, and there is no guarantee that American or Chinese labs will keep releasing them. Middle powers have every reason to assume that AI access will eventually come with strings attached.

How should they respond? Building frontier AI systems from scratch is unrealistic for most: The infrastructure gap is enormous and widening. But middle powers are not powerless. They can look for positions in the AI supply chain that the United States and China cannot easily replicate or route around—bottlenecks where they can offer something indispensable. The Dutch semiconductor equipment company ASML occupies one such position: Every advanced chip fabricator in the world needs its machines to operate. South Korea and Japan hold others, controlling the memory chips and manufacturing equipment that AI systems depend on. Middle powers can also look downstream at the capacities needed to turn AI breakthroughs into real-world applications—such as advanced manufacturing, robotics, and domain-specific testing and integration. The key is to find niches that are continuously valuable rather than exhaustible in a single transaction, and then to use them to deepen partnerships and make access harder to curtail or weaponize.

Ukraine, improbably, may have improvised its way to a version of this strategy. It holds two assets that no other country can currently match. The first is its drone manufacturing ecosystem. Over the course of the war, Kyiv has built a sprawling network of domestic producers that have iterated at extraordinary speed under conditions of genuine military necessity—using cheap labor, scrappy manufacturing ingenuity, and the scaled use of 3D printing to reduce dependence on the Chinese supply chain while maintaining low costs. The United States, hampered by decades of prioritizing exquisite weapons systems and offshoring manufacturing, has struggled to keep up with Ukraine’s pace. The Defense Department’s Replicator Initiative, which aimed to field “multiple thousands” of drones from 2023 to 2025, delivered only hundreds. The Trump administration later launched the Drone Dominance program, a series of competitions for more than $1 billion in defense contracts; the first phase concluded in March with SkyFall, a Ukrainian fiber-optic drone company, taking first place by a significant margin. While the United States expends million-dollar surface-to-air missiles to defend against drones in the Middle East, Ukraine produces thousands of interceptor drones at a fraction of the cost. These are not capabilities that peacetime defense industries, for all their resources, can easily replicate.

The second Ukrainian asset is the data that those drones generate. Drones have surpassed all other weapons in inflicting casualties for both Ukrainian and Russian forces. While a large percentage of drone missions fail because of electronic warfare and other countermeasures, success rates are improving rapidly with the integration of specialized computer vision models. AI systems that can autonomously navigate and track targets would eliminate the need for a human pilot on the other end of a radio link and, with it, the vulnerability that accounts for most failures. To train those systems, as well as the AI weapons of the near future, developers need the kind of data that Ukraine is now offering: millions of hours of combat footage showing battlefield strikes and targets’ evasive maneuvers.

The United States, for all of its technological superiority, suffers from a deficit in high-quality military data. U.S. warships, for example, produce about 150 terabytes of raw sensor data every day, but according to the chief technology officer of the U.S. Naval Surface Forces, “We are capturing minuscule percentages of that data, which means we’re not able to apply that to train any sort of model.” The problem applies to most types of military AI models. The computer vision models that guide drones need specialized combat footage, exactly what Ukraine’s battlefield generates at scale. But frontier large language models, which are increasingly used for targeting prioritization and decision support, suffer from a similar issue: They were trained largely on the open internet, not on combat data, a gap that has inspired a wave of defense-focused AI startups, all of them hungry for training data that synthetic alternatives cannot reliably substitute.

Valuable data alone does not guarantee leverage; the deal structure matters too. Aware that a country sitting on valuable data may find its leverage diminished once that data is handed over, Ukraine has insisted that partners can train models on the data but cannot take the videos with them, a restriction that prevents a one-time transaction and forces continued engagement with the Ukrainian side. The datasets are managed by a dedicated center within the Ministry of Defense, which gives Kyiv a single point of control over access terms and the ability to update or withdraw permissions as the relationship evolves. And because the data is continuously refreshed by an ongoing war, partners who walk away lose not just what they have already trained on but the stream of new data they would need to keep their models current. A country in Ukraine’s position—dependent on a far more powerful partner whose commitment is uncertain—cannot credibly threaten to withhold access outright; doing so would likely be self-defeating. But it can structure its contributions so that they become embedded in ongoing relationships, creating dependencies that raise the cost of walking away. Data that flows once and stops is a fragile source of leverage, but data that keeps flowing under controlled conditions can underpin a partnership that both sides have an incentive to maintain.

Taken together, these assets give Ukraine a position in the military AI supply chain that few countries can match. Ukraine cannot build frontier AI models. But it can offer something that AI developers need—unique data and a unique testing ground—and use that to secure access to advanced capabilities in return. Its advantage spans both ends of the chain: Upstream, it controls real-world data needed to train models; downstream, it builds the scalable, battle-tested hardware on which AI systems will ultimately run.

To be sure, there are limits to how far Ukraine’s AI leverage can stretch. Battlefield data depreciates as conflicts change character, and the United States and Israel possess significant combat data of their own. Synthetic data, while no substitute for the real thing today, is improving rapidly. Ukraine’s manufacturing advantages could also narrow. Programs such as Replicator may have underdelivered, but they signal a genuine shift in the Pentagon’s priorities, and some recent initiatives show promise, though none has yet answered the question of scale. U.S. allies, too, are investing heavily in their own drone production. As other nations scramble to replicate Ukraine’s manufacturing success, Kyiv will have to tread carefully. Ukrainian President Volodymyr Zelenskyy has clamped down on drone exports that bypass government approval and demanded significant funding and technology from Gulf countries in exchange for counter-drone assistance. These are reasonable steps, but if Kyiv restricts access too aggressively, it risks pushing potential partners to invest in alternatives rather than deepen the relationship.

Still, these constraints are unlikely to negate Ukraine’s position anytime soon. U.S. and Israeli combat data, drawn largely from overhead surveillance and asymmetric operations against weaker actors, cannot substitute for Ukraine’s high-intensity, drone-saturated record of conventional warfare—the kind that the Pentagon is most worried about. And Ukraine’s data is not static: It is annotated by operators with direct combat experience and continuously refreshed by an ongoing war. No amount of peacetime exercises or simulation can replicate the complexity and unpredictability of a live battlefield at this scale. U.S. manufacturing deficits, meanwhile, run deep and cannot be undone in a few budget cycles.

If Ukraine’s advantages are likely to persist, Washington should recognize that it will be much easier to convert its AI superiority into real-world advantage if it can draw on the resources of its allies. Rather than treating Ukraine’s leverage as a nuisance to be minimized, it should seek to lock in access to Ukrainian strengths on terms that benefit both sides, whether through co-production agreements, joint data-sharing frameworks, or investments in Ukraine’s defense-industrial base. The war with Iran has underscored the urgency: Now confronting the very threats that Ukraine has spent years learning to counter, the United States and its Gulf partners have turned to Kyiv for help in defending against Iran’s Shahed one-way attack drones, despite President Trump’s repeated claims that Zelenskyy holds “no cards.”

*          *          *

The broader lesson extends well beyond Ukraine. Kyiv’s immediate goal is to sustain U.S. military support, not to secure a place in the global AI economy. But other middle powers, operating under less desperate circumstances, can apply the same logic to a broader set of objectives. They need to stop thinking about AI strategy primarily in terms of building models or buying chips and start thinking about how they can position themselves at key bottlenecks in the chain between raw AI capability and real-world impact.

For some, that will mean leveraging positions in the semiconductor supply chain, making their contributions integral to arrangements they want to preserve. For others, it will mean investing in the robotics and manufacturing capacity required to turn AI breakthroughs into physical products. Still others hold unique data streams with the same strategic properties as Ukraine’s battlefield footage—real-world, continuously generated, and reflecting conditions that synthetic alternatives cannot replicate. Germany’s small and medium-sized enterprises and Japan’s precision manufacturing sector, for example, generate enormous volumes of sensor and process data that could help train embodied AI systems.

In every case, the logic is similar: Find something that the great AI powers need but cannot easily produce themselves, and use it to bargain for access, investment, and influence. Middle powers can also strengthen their hand by coordinating with one another. Ukraine has established long-term partnerships with the United Kingdom and Germany to jointly develop drone and counter-drone technologies, embedding its contributions in a broader allied framework that no single partner can easily disrupt. But middle powers that fail to find a foothold in the emerging AI order, either collectively or individually, risk bearing the costs of AI-driven disruption while capturing few of the benefits.

Ukraine did not choose its role as a laboratory for drone warfare and AI-enabled combat. But Fedorov’s announcement suggests that Kyiv is well aware of the assets that this terrible war has produced. For the rest of the world’s middle powers, comfortably at peace but drifting toward technological marginalization, the question is whether they can muster the same strategic clarity without the pressure of a war to concentrate the mind.


Jake Steckler is a research scholar at GovAI. He previously worked in the U.S. Senate, volunteered with frontline aid organizations in Ukraine, and served in the U.S. Army as an aviation officer.
Sam Winter-Levy is a fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace.