Building Public Compute for the Age of AI
Governments around the world have been developing ways for public-goods-creating entities to access compute.

In 2024, California was home to an intense debate over SB-1047, a controversial artificial intelligence (AI) regulation bill ultimately vetoed by Gov. Gavin Newsom. A few months later, SB-1047's sponsors returned with a new bill, SB-53. This bill, which is far less controversial than its predecessor, has only two provisions, one of which is a proposal for CalCompute, a public AI compute reserve for startups and researchers.
CalCompute is not the first public AI compute reserve to be proposed: Similar reserves have been floated in New York and the United Kingdom as mechanisms to give researchers, civil society, and other public-sector or early-stage entities access to computing resources for modern artificial intelligence. Indeed, as AI advances, compute will likely serve as a valuable input to scientific knowledge and other public goods with vital benefits to society. But what does giving "access to compute" to these entities truly mean? How can compute be made more accessible to smaller entities? Why do these efforts matter for the future creation of public goods? And how are different models around the world trying to accomplish this goal?
Understanding Compute
In general, compute refers to the services, hardware, and infrastructure needed to perform computational operations. In colloquial AI discourse, compute often refers to the specialized chips, such as graphics processing units (GPUs), used in AI data centers to train and run AI models. Compute is one of the most important resources in the development of large language models (LLMs). Scaling training compute, the amount of computational resources used to develop a model, has historically produced significant increases in model capabilities, while scaling compute during a model's runtime, or inference, as seen with OpenAI's o-series, can help models "reason" and solve difficult problems, especially in mathematics and programming.
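For a concrete sense of what "training compute" means, a widely used back-of-the-envelope rule estimates it at roughly six floating-point operations per model parameter per training token. The short sketch below applies that rule to an illustrative model; the parameter and token counts are assumptions, not figures for any particular system.

```python
# Rough training-compute estimate using the common C ~ 6 * N * D rule of thumb,
# where N is the parameter count and D is the number of training tokens.
# The values below are illustrative assumptions, not figures for any real model.
params = 70e9    # N: 70 billion parameters (assumed)
tokens = 2e12    # D: 2 trillion training tokens (assumed)

training_flops = 6 * params * tokens
print(f"Estimated training compute: {training_flops:.2e} FLOPs")  # ~8.40e+23 FLOPs
```

Totals on this order of magnitude help explain why frontier training runs occupy thousands of high-end GPUs for weeks or months at a time.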
Most of the world's leading-edge AI compute chips are produced in complex international supply chains, often using chips designed by firms like Nvidia, manufactured by the Taiwan Semiconductor Manufacturing Company (TSMC), and housed in data centers directly or indirectly owned by firms like Microsoft and Amazon. In turn, due to the specialized inputs to chipmaking and high demand, the price of chips is very high. In the words of Amazon CEO Andy Jassy, chips are "the biggest culprit" behind the high cost of AI. As a result, less well-resourced actors, such as universities, lack the budgets to buy enough chips to form state-of-the-art compute clusters or to contract for the use of large numbers of GPUs from large frontier data centers.
This shortage of compute poses a major short- to medium-term challenge not only for the future of these public entities but also for the future of so-called public goods. Economists define public goods as goods that are non-excludable, meaning nobody can be excluded from reaping their benefits, and non-rival, meaning one person's use of them does not diminish others' use.
The paradox of public goods, in turn, is that because they are non-rivalrous and non-excludable, it is difficult for the actors creating them to capture the financial returns to those goods, so they require external, often governmental, support to keep doing so. For example, universities often receive grants to fund scientific research, which can (though it is not always guaranteed to) yield many public goods, like the advancement of scientific knowledge in a field, which other researchers can use to drive further progress, develop valuable drugs and therapies, and more. Outside of universities, nonprofits such as Wikimedia create public goods like Wikipedia to provide open access to broad information, and even social-impact-oriented startups might develop open-source software for a similar purpose. In these cases, actors that focus primarily on creating public goods often capture few financial returns directly.
In turn, this lack of financing creates a problem when it comes to compute. In the age of AI, it is becoming increasingly clear that compute is now one of several vital inputs to many of these public goods. For example, even those skeptical of AI have acknowledged the widespread impact it might have in the sciences, through protein structure prediction models such as AlphaFold2 that accelerate drug discovery, materials generation models such as MatterGen, and more. The public good of knowledge production at universities increasingly requires AI compute, but the entities tasked with creating public goods lack the financial resources to buy large numbers of chips to do so. The same may soon become true for other public goods creators if AI is deployed widely across the economy.
Public goods creators, therefore, especially universities at present, are in dire need of this resource in order to stay competitive in the age of AI. Sensing this challenge, governments worldwide have been developing ways for public-goods-creating entities to get access to compute. Several distinct models have emerged: at the national and subnational levels in the United States, at the national level in China, and through other notable efforts in Europe.
The United States: Donations, Pilots, and Consortia
The United States' efforts to provide its public-goods-creating entities access to high-quality AI compute have focused primarily on universities, given the immediacy of the compute challenge in the sciences. This effort has occurred at the national and state levels. At the national level, leading initiatives include the National Science Foundation's (NSF's) National AI Research Resource (NAIRR) Pilot, NSF's Advanced Cyberinfrastructure Coordination Ecosystem (ACCESS), and the Department of Energy's INCITE program, among others.
NAIRR is perhaps the best-known program. It is a pilot under which the U.S. government partnered with 26 corporate and nonprofit partners to provide U.S. researchers, educators, students, nonprofits, and even small businesses (many of them public-goods-creating entities) with access to AI compute resources. These resources include federated access to government supercomputing clusters, such as those at the Department of Energy's Oak Ridge National Laboratory, while firms like Microsoft and Nvidia donate cloud computing credits and other resources to support universities and other public goods creators.
NAIRR, of course, is not alone in its efforts. NSF runs ACCESS, a program through which researchers can apply for time on advanced computing systems and use it for various scientific needs. The Department of Energy similarly runs INCITE, which provides access to its supercomputers, including Oak Ridge National Laboratory's Frontier, the world's first exascale supercomputer.
These efforts are significant and should be lauded. But they all suffer from a similar challenge: scale. Nearly all the major American programs to give AI compute to public goods creators are too small to accommodate the needs of large universities, let alone other public-goods-creating entities and the future widespread use of AI. For example, simple calculations reveal that NAIRR provides only around 3.77 exaFLOPS of computing power, equivalent to roughly 5,000 H100 GPUs. That amount still lags behind industry and is likely too little for the entire U.S. research community's annual AI needs, let alone for other public entities. This issue threatens to create significant challenges, as it means these entities lack valuable inputs needed to create public goods in the age of AI.
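The rough equivalence above between 3.77 exaFLOPS and about 5,000 H100 GPUs can be sanity-checked with simple arithmetic. In the sketch below, the peak throughput and utilization figures are assumptions chosen for illustration, not published NAIRR specifications.

```python
# Back-of-the-envelope check of the NAIRR-scale figure cited above.
# The per-GPU throughput and utilization are assumptions for illustration,
# not published NAIRR allocation figures.
H100_PEAK_BF16_TFLOPS = 989   # approximate dense BF16 tensor-core peak of one H100
UTILIZATION = 0.76            # assumed sustained fraction of peak
NUM_GPUS = 5_000              # rough H100-equivalent count cited above

total_exaflops = H100_PEAK_BF16_TFLOPS * UTILIZATION * NUM_GPUS / 1e6  # 1 exaFLOPS = 1e6 teraFLOPS
print(f"~{total_exaflops:.2f} exaFLOPS")  # ~3.76 exaFLOPS, in line with the 3.77 figure
```

By comparison, publicly reported industry training clusters run to tens of thousands of comparable GPUs, which is the scale gap described above.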
Of course, other entities are trying to supplement the federal effort. For example, California proposed the aforementioned CalCompute, a public computing cluster for researchers and startups. However, SB-53, the bill that proposed CalCompute, does not provide any specifics on the proposal and only sets up a commission to study the effort. New York, meanwhile, launched Empire AI, an independent research consortium through which the state's leading universities receive access to high-performance computing (HPC) resources for certain responsible AI research. Again, however, public specifics remain limited.
The United States, in many ways, is still in the early days of building national compute access for public-goods-creating entities. The NAIRR Pilot, INCITE, and other programs provide valuable benefits but clearly will not meet the needs of a nation whose vast university system alone will likely one day need far more compute for scientific research, not to mention other potential users like nonprofits. Private entities like early-stage startups, however, may be able to draw on the robust private-sector financing available in the United States. If using compute to generate public goods and support public research is a priority, the United States will ultimately need to scale up these efforts for the age of AI, moving beyond pilots to national buildouts that scientific researchers and others can access regularly to break ground on new discoveries.
China: Moving Toward a National Computing Network
While federal and state governments in the United States are piloting various efforts to foster access to compute for public entities, the People's Republic of China (PRC) has been much more aggressive in adopting a state-led approach to building AI compute, though its efforts extend beyond creating compute for public goods providers. The PRC sees AI more broadly as a "new productive force" for economic growth and has therefore launched a number of central initiatives to expand its access to computing power.
The first of these programs is the National Integrated Computing Power Network (NICPN). Launched in 2021, NICPN is a megaproject that seeks to optimize and integrate national compute usage in China to conserve and direct it to the most valuable applications. At the heart of NICPN are programs like the China Computing Net (C2NET), which is an integrated grid of AI computing centers, data center clusters, and supercomputers spearheaded by Huawei and the state-run Peng Cheng Laboratory in Shenzhen. These programs seek to make AI compute akin to a public utility, like waterways and electricity, which researchers can pay for and use to complete a variety of scientific research tasks.
Underlying efforts like NICPN are other large-scale projects, notably the East Data, West Computing program. Under this program, China designated eight national computing hubs, spanning eastern demand centers in regions such as Beijing-Tianjin-Hebei and Guangdong-Hong Kong-Macau as well as the western provinces where the data center clusters servicing them are built, such as Gansu, Inner Mongolia, Ningxia, and Guizhou (China's historic data center hub). This approach aims to take advantage of more cost-effective clean energy in western China and use it to build AI compute capabilities that service the rest of the country.
Chinese subnational governments have also been active in computing. For example, Chinese cities have launched more than 30 intelligent computing centers, which house GPU clusters for more advanced computing needs. The Chinese government also has a series of state-built supercomputers that have been used for AI research—for example, Peng Cheng Laboratory’s supercomputers were used to train some of Huawei’s Pangu models.
China's national computing efforts, in turn, have a prominent focus on creating some public goods, particularly scientific research. Official Chinese government documents regularly highlight universities and enterprises as core intended users of these national computing efforts, and the focus on giving compute to researchers is in line with China's significant scientific and technological ambitions. It is worth noting, however, that these programs do not exist only for entities such as universities: Many of the same policy documents also highlight the need to provide compute to Chinese firms.
Further, while China's efforts appear impressive, it is important to understand how geopolitics both affects and motivates them. Part of the motivation for China's push to optimize national computing resources is American export controls on advanced AI chips, which limit China's access to AI compute. Despite these national efforts, evidence suggests that American export controls may indeed be inducing shortages of high-end AI inference chips in China, as seen in Chinese buyers' rush to stockpile chips like Nvidia's H20. Such shortages will directly affect the AI compute available to Chinese researchers, which may limit the efficacy of the programs outlined above.
The European Union: A Multilateral Effort
The United States and China are not the only players in the game. Other countries are trying to provide compute to public goods creators, with the most notable third player being the European Union. The flagship European effort is the EuroHPC Joint Undertaking, a program to share world-class supercomputers across European countries through a network of petascale and pre-exascale machines, including Finland's LUMI and Italy's Leonardo.
EuroHPC works by allowing researchers from academia, research institutes, public-sector entities, and industry to apply for time on these machines. The program is open to academics and public-interest researchers, the traditional public goods creators, but companies can also participate, provided they commit to publishing the results of their work. EuroHPC also has a significant budget of approximately 7 billion euros for 2021 through 2027 and provides researchers access free of charge.
Integrating closely with EuroHPC is a new program, launched in late 2024, called the AI Factories initiative. The initiative creates a series of European high-performance supercomputing hubs and a corresponding AI innovation ecosystem that will reportedly focus on harnessing AI in fields such as health and manufacturing, among others. Europe intends to launch 15 AI factories by 2026, including procuring nine new supercomputers, a tripling of current EuroHPC AI capacity.
Despite Europe's lofty goals, its efforts to scale AI compute for public goods creators face major wrinkles. First, the European efforts run into the continent's relative deficiency in AI compute supply. The United States is the global leader in access to AI compute, with Chinese firms also having a notable, though smaller, presence. Europe is relatively compute poor by contrast. Even with efforts like EuroHPC and the AI Factories initiative, it is unclear whether such moves come close to closing the continent's AI compute gap.
Europe's compute initiatives raise another key question: Should public compute programs serve only public goods creators, or private industry as well? Many of the programs outlined above, such as the AI Factories initiative, focus intensely not only on supporting researchers but also on supporting European startups and industries. This move is in line with recommendations in the Draghi report, which highlighted the need for Europe to better support new firms, especially in high-technology domains. Europe's AI Factories effort, therefore, is likely to provide an interesting case study on which organizations these kinds of compute provision efforts should serve.
***
Of course, other players besides the U.S., China, and the EU are attempting to give their researchers access to compute. India, for example, is trying to build its own scalable compute ecosystem under the IndiaAI Mission, with an AI cloud marketplace for researchers, students, and entrepreneurs. But this program, and many others like it, remains at an early stage, primarily because compute access in countries outside the United States, China, and Europe is even more limited.
Together, these efforts paint a disparate picture of the global race to give researchers and other public goods creators access to compute. The United States has run a series of programs and pilots, but they need to be scaled up to meet researchers' needs. China is embarking on a state-led effort, but export controls may hamper its effectiveness. Europe's EuroHPC is similarly targeted at public goods creators (and other players) but faces structural challenges owing to the continent's deficiencies in the compute market.
Regardless of the efforts these governments have made so far, it will be incumbent on them to resolve these challenges. Public goods will not become less important in the age of AI; scientific knowledge, for example, will still prove valuable to unlocking new drugs and cures. The first government to treat compute as an input to these public goods will be the first to spur the wave of innovation, economic growth, and societal revitalization necessary for this new age. States need to see compute as much more than just chips: It is an input to the very public goods on which the societies of the future may depend.