On the Path to AI Sovereignty, AI Agency Offers a Shortcut
More than AI sovereignty, strengthening AI agency can create a powerful buffer against volatile geopolitical headwinds.
Published by The Lawfare Institute
Artificial intelligence’s (AI’s) explosive evolution from a niche field of academic study to a ubiquitous, heavily hyped technology has created a pressing sense of urgency as countries vie for superiority. No doubt intensified by the Trump administration’s aggressive “America First” posture, many countries have fundamentally reevaluated how tightly coupled they should allow their nation’s AI infrastructure to be to the whims of the U.S. or other AI superpowers. In response, many are turning to the strategic goal of sovereign AI—an approach to AI that prioritizes geopolitical independence and autonomy over a nation’s AI stack, which includes the data, compute, energy, and talent necessary to run AI. AI sovereignty, in its strictest sense, implies full ownership of a great deal of costly infrastructure—a feat that even the world’s AI superpowers largely don’t currently achieve, thanks to a complex global supply chain.
Most countries have pursued two main priorities in their quest for sovereign AI: building sovereign AI models and investing (significantly) in the local compute infrastructure and data resources necessary to run AI models. For example, Singapore has invested more than $50 million to develop a large language model tailored for Southeast Asian languages. Taiwan’s similarly ambitious efforts with TAIDE, or Trustworthy AI Dialogue Engine, started small—with roughly $7.4 million—but may demand billions to reach greater uptake.
AI sovereignty has increasingly resonated among countries in the “global south,” which have long desired greater independence from the whims and influences of powerful foreign actors. For these countries facing an ever-widening AI divide and a world order that largely leaves them out of consequential discussions around AI’s evolution, AI sovereignty offers a vision for rejecting geopolitical unpredictability, as well as an unequal and unjust status quo.
But paradoxically—especially for nations in the global south who may be joining this AI race without the luxury of economic advantage—defining and enacting AI sovereignty ultimately risks undermining its stated goals. Nations with limited market strengths seeking greater agency over their AI futures can benefit from considering how middle powers and small states have fostered conditions to support greater independence, autonomy, and resiliency in AI.
Countries in the global south have found their footing in the AI terrain, and efforts to assert greater ownership over AI have emerged as a result. Early investments in sovereign compute infrastructure have already amounted to hundreds of millions of dollars for BRICS powers such as Brazil and India, with future commitments promised in the billions. Throughout Africa, long-running conversations about data sovereignty have shaped AI strategies to promote AI sovereignty on a continental scale. For example, in March, Zimbabwean billionaire Strive Masiyiwa announced plans to build Africa’s first “AI Factory” by the end of June. What’s more, April’s Global AI Summit in Kigali led to a continental Declaration on AI that invokes sovereignty as a first guiding principle for African AI development and deployment. In Southeast Asia, many countries have embraced AI sovereignty in their framing for AI investments and initiatives. For example, Indonesia’s private sector has invoked Indonesian AI sovereignty in partnering to build a Bahasa Indonesia-serving large language model (LLM), and Malaysia has made significant investments to develop its own LLM and sovereign AI cloud.
Strong political and corporate pressures have created an increased interest in AI sovereignty. In the global south, national decisions about how to build the AI infrastructure necessary for closing the AI divide—including data centers, data sharing frameworks, energy infrastructure, and internet connectivity—often require partnership with companies headquartered abroad. These arrangements easily become mired in geopolitics, as many countries find themselves in the familiar but unwelcome position of being forced to signal strategic alignment based on whether they opt to partner with companies in the U.S. or China. And companies selling chips and other data and compute resources have long used AI sovereignty as a successful sales tactic. They benefit from a narrative that sovereign ownership of an AI stack—and the tools required to maintain it—would strengthen national security and unlock gains that countries simply can’t afford to miss out on in the race against their neighbors.
President Trump’s policies and personnel changes have so far aimed to double down on American tech dominance, revoking Biden-era efforts to prioritize more equitable, inclusive access to AI while lambasting and crippling efforts to rein in concentrations of power in the U.S. Paired with the existing concentration of power in the U.S. tech sector, these moves have caused justifiable concern for those looking to dictate their own terms for how AI advances in their countries.
The U.S. has a lot to lose, commercially and politically, if countries go their own way with AI. But governments in the global south in particular have grown justifiably wary of falling in line to adopt policy postures that may please “global north” governments but do little to serve their own people.
The U.S.-dominated AI industry’s current business model demonstrates an inclination toward dependency. Even as offerings such as reduced-cost cloud credits with a U.S.-based cloud services provider (CSP) may ease the burden of leveraging AI for near-term benefits in emerging markets, typical arrangements with CSPs can effectively lock users into a U.S.-controlled platform or even limit access arbitrarily. This is a common frustration for businesses worldwide, but it puts nations in a precarious position when important public services—such as health care or benefits distribution—or even critical infrastructure like electrical grids rely on foreign-owned and -operated AI infrastructure. For these countries, this dependence—coupled with an unpredictable or volatile policy environment in the host country—raises the question: What happens when access to these CSPs is turned off?
AI sovereignty is motivated not just by profits or the geopolitical chess match surrounding AI; it’s also a direct response to the need for cultural preservation and protection. AI models often reflect the priorities and perspectives of their builders, which makes AI’s wholesale import a troubling strategy for many non-Western countries. In the global south, entrusting global north actors with control over training data resources; AI model development, tuning, or maintenance; or AI model auditing or oversight can all too easily introduce flaws such as bias, homogenization, and poor performance for groups or contexts not well represented in training data or by model development teams. Much of the data used to train common commercial models like ChatGPT has historically come from Western cultures, and it often embeds social biases that reflect the context of its origin. These flaws undermine not just AI’s ability to deliver for those in the global south but the ability of people to trust AI in the first place. Especially for countries navigating AI against a backdrop of colonialism or entrenched power asymmetries, seemingly benign features of a model—for example, an LLM working only for English-speaking users—can reinforce long-standing forms of exclusion or marginalization. As Nigeria’s minister of communications and digital economy, Bosun Tijani, has said, AI is simply too consequential to import, as it increasingly shapes “what you think, how you think, and how you operate.”
Achieving AI sovereignty also presents undeniable practical concerns. For example, it’s very expensive. Even in wealthier nations, private companies struggle to keep up with necessary investments in critical components of AI infrastructure, like data centers and advanced compute hardware such as GPUs and TPUs. There are also significant environmental impacts. AI’s projected energy consumption is now so severe that it’s prompted a global reassessment of nuclear energy as a power source. Data centers’ (alarmingly undocumented) water consumption has led to civil unrest and concerns of further marginalization of already marginalized groups. Generative AI’s impacts on e-waste are pronounced and growing, with already staggering amounts—billions of kilograms each year—of “off the books” shipments flowing from higher-income countries to lower-income countries. Still, acknowledging these difficulties does not mean resigning to accept the status quo. AI is too consequential to allow for a world order that assigns which countries “consume” and which “produce” the technology. Global south governments—as well as those operating outside of government, such as private companies and civil society organizations—can choose to focus simply on what works to build greater agency over how AI intersects with their national socioeconomic realities. And, as the global AI divide threatens to grow into a chasm, actors in the global north can do far more to resist casting AI sovereignty into a zero-sum mold.
Global south governments can opt to strengthen their AI agency by working with neighboring or peer countries to better coordinate and cooperate on AI infrastructure development. There’s strength to be had in numbers, and collective action with regional or like-minded partners can increase nations’ bargaining power with AI superpowers for greater control throughout the AI supply chain. These countries could band together for fairer or less extractive joint trade agreements with multinational companies. They could also align multilaterally in negotiations in international fora, as the G77 group of developing countries successfully did during the 2024 UN Summit of the Future. Collective action could also mean structuring more favorable co-investment terms for shared regional infrastructure. Forming regional arrangements for standardizing secure sharing of data with neighboring countries, for example, would allow for more robust, regionally representative datasets, which can be game-changing for regions poorly represented in the dominant AI training data of today. It can also create greater bargaining power around data protections in cases where data storage or data access is provided by extranational companies.
Countries can and should take advantage of existing fora to facilitate regional cooperation. For example, African nations have, through the African Union, created the Continental AI Strategy and launched the African AI Council, which will offer a natural forum for identifying and advocating for shared continental priorities around AI. Caribbean nations—alongside the United Nations Educational, Scientific and Cultural Organization (UNESCO)—have also launched the Caribbean AI Initiative as an effort to share both the burden and the benefits of developing AI to better serve the region. Other existing fora—such as the Economic Community of West African States, the East African Community, or the Association of Southeast Asian Nations (ASEAN)—provide a starting point to help smaller countries with fewer financial resources connect with other allies to facilitate community and coordination. And beyond regional alliances, nations across the global north-south divide can benefit from stronger cooperation with similarly sized or oriented nations. For example, last year, Singapore and Rwanda—both members of the Digital Forum of Small States—jointly launched the AI Playbook for Small States, which provides both strategic and tactical guidance tailored to the unique economic and policy challenges these actors face. Another way governments can work toward sovereignty is to take a more targeted “triage” approach by practically assessing and prioritizing where their countries may face the most significant cultural or socioeconomic vulnerabilities—and, from there, determine where within the AI stack they could most meaningfully benefit from building increased autonomy.
For instance, at the Paris AI Summit, Singapore’s minister for digital development and information, Josephine Teo, shared lessons from Singapore’s journey in establishing greater agency over AI models meant to serve Singaporean citizens. The Singaporean government first identified critical challenges or threats posed by nonsovereign AI offerings. For example, commercially available language models had been trained on and better served speakers of English than speakers of local Southeast Asian languages. By prioritizing agency over one critical link in the AI value chain, Singapore was able to triage where homegrown investments were most needed (data collection, curation, governance, or model training expertise) and where it could find viable complementarities with other (nonsovereign) stakeholders, such as by leveraging extraterritorial data centers for model training.
For AI sovereignty to deliver practical gains, not all points along the AI supply chain should be treated as equally urgent. Countries must identify what is most important for their AI needs, and then work from there. For example, is preservation of linguistic heritage a key concern? Then developing a sovereign LLM may be a key national priority, potentially overriding the desire to establish in-country data centers. If historical patterns of cultural extraction pose significant national concern, then increasing sovereignty over data resources may be a more pressing priority than building sovereignty over compute.
To achieve this goal, governments can engage with local researchers and entrepreneurs so that policymakers understand key compute bottlenecks in the current AI chain, which they can then address with more surgical, direct interventions.
Developing national resilience and autonomy around AI also means acknowledging the importance of a country’s human capacity—and ensuring human benefit, not just technological advancement, drives the assessment of what’s needed to achieve greater agency over AI. Prioritizing the cultivation of a well-trained domestic AI workforce and an AI-literate populace can help safeguard against external dependencies. Even beyond providing support for institutions that teach technical AI skills, support can be directed toward institutions that equip lawyers, judges, and civil society to ensure a country’s legal systems can appropriately respond to AI’s challenges, whether posed by local or international actors. By choosing to support those in the social sciences and humanities, policymakers and public servants, and the many others whose jobs will evolve with AI’s spread, governments can ensure that as AI increasingly intersects with these fields, the public is ready.
At least in the near term, AI sovereignty will prove challenging for most states. In the long term, it could just as easily enable a more level global playing field as it could lead to dire environmental impacts and fragmentation of the global AI ecosystem. But the end goal of ensuring countries can assert greater agency and control over their own AI futures can, and should, be one that is shared across the international community.