
Will AI Produce the Next Great Divergence?

Sarosh Nagar, David Eaves
Monday, May 11, 2026, 2:00 PM
An analysis of AI and institutions.
(Sasha Alalykin, https://shorturl.at/oTUhS; Public Domain, https://creativecommons.org/public-domain/)

In the 1300s, the Black Death swept across Europe, killing 30 to 50 percent of the continent’s population. The loss of so much of the continent’s labor supply caused wages to rise across Europe, but Western and Eastern Europe differed in their response. In Eastern Europe, lords used their power to cap workers’ rising pay, while Western lords did not. As a consequence, in Western Europe the growing power of workers weakened feudalism and opened the door to the modern economy, while serfdom grew stronger in the East.

This example, taken from Daron Acemoglu and James Robinson’s book “Why Nations Fail,” is a textbook case of how a so-called critical juncture—a shock like the plague—magnified a small difference in institutions in a way that profoundly altered societies. Today, Acemoglu and Robinson’s lesson has relevance beyond the plague—it can be applied to the rise of artificial intelligence (AI). Artificial intelligence—and the subsequent technologies it may underpin, including biological research, robotics, and more—presents a shock that may be akin to the next Industrial Revolution. If models improve at the rate they have for the past three years, they could accelerate scientific research, reshape the economy, and more. In this view, AI will certainly present another critical juncture.

Yet, despite the growing consensus that AI will be transformative, few have analyzed how AI interacts with institutions. Some literature, for example, does analyze how artificial general intelligence (AGI) may strengthen or erode state legitimacy, highlighting the need for a middle path. Other literature highlights how AI might empower public infrastructure. Yet there is little work analyzing how AI may be a critical juncture and interact with existing institutional structures. Furthermore, if governments—national, state, and local—are not careful, they risk ending up on the wrong side of this juncture, locked out of the technology’s benefits and subject to its harms.

An Artificial Shock

There are four reasons we believe AI might be a uniquely powerful accelerator of divergence in institutions worldwide. First, the technology touches a large number of sectors in distinct ways. AI is being rapidly adopted by firms in sectors from coding to finance, but what the impact looks like by sector will no doubt vary substantially—for example, the interpersonal nature of medicine might mean the profession changes less dramatically, while software engineering is more greatly altered. The result is the possible emergence of a vast array of different institutional structures, especially as countries prioritize institutional adaptation aligned with their national interests—for example, a nation dependent on blue-collar labor supplying the AI boom might encourage expansion of such critical sectors.

Second, the technology is moving fast, while the law moves very slowly. The Model Evaluation and Threat Research (METR) group found that the length of the human software tasks that AI can complete has been doubling every seven months. If research into self-improving AI systems continues, this might accelerate even further. At the same time, laws and institutions have tended to move quite slowly with respect to AI, in part because technology outpaces governments’ reaction times and some governments remain hesitant to regulate AI at all. Consequently, differences in pre-AI legal systems or institutional structures across countries, many of which are slow-moving, influence how institutions adopt and respond to AI. For example, when deepfakes first emerged, a 2024 study found that American social media platforms originally only removed deepfakes reported as copyright infringement, while those reported as privacy violations were allowed to remain up due to the presence of specific laws against the former but not the latter. This approach diverged substantially from China, which passed anti-deepfake regulations in 2022 (though they were often poorly applied), highlighting a real difference in how the institution of social media evolved in response to AI.

Third, there are the massive disparities in use of and access to the technology worldwide. As empirical research has shown, there is a growing AI divide between the Global North and South. This is due to several factors—for example, modern AI development is bound by scaling laws, which means that nations need sufficient access to expensive AI accelerator chips to do frontier training runs. Using such compute for training is also very energy intensive. Due to the need for chips, energy, and other resources, most large-scale AI development is concentrated in the United States and China, with the two countries together holding over 80 percent of total global AI supercomputer performance. Even more basic than the AI stack, access to AI is path-dependent on prior access to the internet; currently, 2.2 billion people remain off the internet. Broader disparities in wealth, population, and more also have downstream implications. The result of these combined factors is that institutions in the Global South often struggle to adapt.

The result can create large-scale institutional divergence for several reasons. At the government level, policymakers in some Global South countries might focus more on AI for development, as India did during the recent AI summit, while other actors have different priorities, such as Europe’s focus on frontier regulation. These competing priorities result in different laws, bureaucracies, and institutional structures. At the people-to-people level, social institutions, such as norms around AI use, may be slower to develop in places where access to the technology is limited. The fact that there is a conceivable future where some countries build bureaucracies to adopt and regulate superhuman AI while others focus on internet deployment should highlight how vast institutional divergence might be.

Fourth, there is a massive epistemic disparity among institutional policymakers with respect to AI. In many revolutions or shocks, the epistemic view people had of the shock was similar—few governments in 2020 would disagree with the idea that COVID-19 was an infectious disease. Yet AI has a much more radical divergence in how institutions or governments view the technology. Some governments around the world clearly see AI as a uniquely transformative technology—for example, the United Arab Emirates has spent huge sums on AI chips, talent, and more, seeing the technology as a way to industrialize. Others might show greater concerns about large-scale harms from AI, like cyberattacks, as the UK does with its AI Security Institute. Meanwhile, other governments worldwide believe that, while AI will be transformative, the most realistic harms may be shorter-term harms like labor disruption, rather than large-scale or catastrophic harms.

The epistemic worldview in which policymakers live greatly magnifies how institutions might diverge; each government may radically adapt institutional structures to reflect its priorities. The United Arab Emirates’ AI optimism certainly played a role in its decision to appoint the world’s first AI minister. U.K. policymakers’ concerns around AI harms led them to set up the AI Security Institute (AISI), building the world’s leading institution for AI testing. Meanwhile, governments that reject framings of imminent risks are focusing their institutional effort on issues like enhancing AI diffusion and deploying AI in particular settings. This epistemic divergence can cause countries to set up radically different institutional structures based on their view of the technology.

Of course, institutions could choose to put their heads in the sand and refuse to adapt to AI. However, if institutions don’t adapt, they risk a few key problems. First, there is a chance many institutions themselves might become less relevant—many in higher education, for example, worry that AI may undermine the value of universities by eroding the academic integrity of existing assessment practices. These institutions must, therefore, adapt to ensure their continued relevance.

Second, even if AI does not render these institutions outright irrelevant, non-adaptation risks implicitly accepting a particular digital future—the one that current pre-AI laws and institutions would create by default—which may cause real-world harm. The aforementioned case of copyright laws and deepfakes highlights that, absent adapting legal institutions to tackle such issues, the proliferation of harmful deepfakes may negatively impact those the institutions are supposed to protect. Third, beyond harm, failing to adapt can deter the beneficial adoption of the technology—one survey by the European Commission found that firms report that legal barriers around issues like liability were some of the top hurdles to AI adoption in previous years. The result risks a nation leaving its citizens exposed to harm while failing to help them benefit from AI.

Global Divergence

Given that AI may cause institutions to diverge, what might this look like on a global scale? A few different scenarios are possible, none of them mutually exclusive. If AI capabilities grow rapidly toward highly capable agents and superintelligence but remain concentrated in the Global North, global institutional divergence might be quite significant due to the disparities in the technology. Institutional bureaucracies will develop in the countries in which AI progresses rapidly, but not in others.

The United States—and likely China—will develop institutions for regulating agentic commerce and establishing technical infrastructure for AI agents, new revenue structures for economies reshaped by strong AI systems, and bureaucracies for coordinating with the frontier labs. Outside of those leading countries, there will be countries that still seek to expand national internet access, while other nations, like India, might combine their existing digital public infrastructure stack with AI agents in a third model.

In alternative worlds, where AI capabilities grow slowly and diffuse rapidly worldwide, we can imagine much less divergence. Rapid international diffusion means the technology is present across more countries, while slower growth gives governments worldwide not only time to adapt but also the opportunity to learn from cutting-edge actors on best practices. Institutional adaptation might resemble responses to technologies such as the internet—standard setting internationally, targeted interventions for economic reskilling, and more—which are likely to produce divergence, though a less extreme version of it. The general rule, in our view, is that the more rapidly these capabilities develop and the more concentrated they become, the more institutions globally will diverge.

How Governments Should Respond

What should governments do to avoid ending up on the wrong side of this divergence? There are four critical steps. First and most important, institutions—public or private—need to get an epistemic handle on where the technology is and the implications it has for them. Many governments, financial institutions, and others are not tuned in to debates at the frontier of AI (even if they do invest in technical personnel), meaning that the technical information they receive may be months or even years behind the cutting edge.

This means that governments and institutions should find ways to close this gap, which can vary substantially depending on the resources available to each country, geopolitical conditions, and other factors. Some countries might establish hubs in zones of AI progress, as the U.K. did with its AI Security Institute (AISI) in Silicon Valley, to be better read-in on frontier information. Other countries might launch in-house labs to experiment with the technology directly, as with France’s in-house public-sector ALLiaNCE incubator, to reach the frontier through sustained learning and testing. A third option might even be striking deals to join existing information sharing agreements, such as those the U.S. Center for AI Standards and Innovation (CAISI) has with the U.K. AISI.

The specific approach that is best will depend on each government’s situation. Of course, being at the frontier does not require governments to take a particular view on AI or its future. Rather, the goal of these efforts should be to ensure that governments are aware of debates happening at the frontier, such as those about whether transformer architectures may face limits, potential growth in inference compute demand, and more.

Second, governments and institutions need to get data on how AI is being diffused throughout society. Good institutional adaptation to AI will focus on promoting positive uses of the technology and curtailing bad ones, which requires data on how AI is used in the first place. A bipartisan letter by Sens. Mark Kelly (D-Ariz.), Mark Warner (D-Va.), Josh Hawley (R-Mo.), and colleagues urging better federal data on AI’s impact on the workforce is one positive example. Another example is the New Delhi Frontier AI Commitments signed at the recent India AI Impact Summit, which encourages participating organizations to disclose AI usage data as well. This kind of data can enable an understanding of AI’s economic impacts and enable better formulation of new policies, laws, and other institutional efforts. Collecting such aggregate data on how society is adopting AI is challenging—it is expensive, it can take resources away from other efforts, and it may not be clear which data is the best to use. However, collecting aggregated, anonymized data on the state of AI diffusion, even if imperfect, is better than relying on assumptions or inference.

Third, with that data, institutions, both public and private, should begin to actively use futures and foresight methods to plan for different scenarios for AI. This means assessing what happens if AI diffuses across the economy rapidly versus slowly, if AI capabilities reach superhuman capabilities, and more. This kind of epistemic scenario planning is what ensures that, even if a government or a financial institution thinks AI is going one way or another, they are effectively prepared to adapt to any scenario. The RAND Corporation, for example, has conducted many of these scenario-planning exercises. While the time investment for these projects is considerable, understanding the options is critical given the scale and scope of AI progress.

Fourth, for the Global South, efforts to close the digital divide quickly—and creatively—are vital. This means active investment in promoting digitization of public and private institutions where appropriate, beginning to experiment with AI use cases, and more. This does not necessarily mean copying the United States or China—India’s innovative use of digital public infrastructure for digital identification and payments, for example, provides one instance of how Global South countries can harness digital technologies to promote development in creative, contextual ways that frontier players do not. AI equivalents could look at establishing agentic commerce platforms for informal or underbanked sections of the economy, multilingual translation platforms like India’s Bhashini, and other initiatives. Of course, this kind of investment can have more significant trade-offs, especially given competing priorities for Global South governments to invest in health, education, or economic development, but such trade-offs are necessary. Much like India’s Unified Payments Interface has transformed financial inclusion for millions of underbanked Indians, similarly designed interventions with AI agents could be a powerful tool for growth and public benefit.

Yet there is a deeper question: What institutional structures will ultimately win out? Which winners might be most surprising? While difficult to predict, one key characteristic of winning institutions is that they rapidly recognize and shape other winning institutions to benefit society. For example, in the Industrial Revolution, the economic institution of the corporation flourished because no other institution could provide the needed capital to finance industrial growth. However, the institutions that thrived alongside it were those that helped support and shape it—for example, British fire insurance companies helped enable corporations’ industrial buildout into the backbone of a modern economy, which in turn helped birth the modern insurance industry. In this case, the winning institution of the corporation was supported and shaped by other key societal institutions.

This notion raises an insightful point: Achieving a good outcome at a critical juncture is about the achievements of not just one institution but rather an ecosystem. For an institution to adapt successfully, it requires a network of mutually reinforcing institutions shaping each other’s adaptation and development, much like insurance companies and corporations did. Indeed, the example makes clear that adapting to a critical juncture like AI is not an effort that can be undertaken by one siloed organization—rather, it must be a whole-of-society mobilization.


Sarosh Nagar is a Marshall Scholar and researcher at University College London. His research focuses on AI and its impacts on society. His work has previously been published in Foreign Policy, The Hill, and The Diplomat and cited by the United Nations Development Programme (UNDP). The views outlined in his articles are expressed solely in his personal capacity.
David Eaves is Associate Professor of Digital Government and Co-Deputy Director of the Institute for Innovation and Public Purpose at University College London. David researches digital transformation and digital public infrastructure. He co-founded Teaching Public Service in a Digital Age, an open-source syllabus used by hundreds of faculty to teach the minimum viable knowledge public servants need on technology to be effective. He also co-founded a startup that grew to serve over 400 governments.
