Know-Your-Customer Is Coming for the Cloud—The Stakes Are High

Kevin Allison, Paul Triolo
Monday, April 29, 2024, 2:00 PM
The comment period for the Commerce Department’s new rules for cloud service providers ends today; policymakers will sift through the feedback with an eye toward issuing final rules by the end of the year.
Herbert C. Hoover Building, United States Department of Commerce, Washington, D.C. (Ken Lund, https://www.flickr.com/photos/kenlund/14462623726; CC BY-SA 2.0 DEED, https://creativecommons.org/licenses/by-sa/2.0/)


Are you a non-U.S. citizen using an American cloud provider’s infrastructure to train powerful artificial intelligence (AI) models? If so, the U.S. government wants to know about it. That is the gist of new know-your-customer (KYC) rules for cloud service providers under development by the Commerce Department.

The rules are the latest in a series of steps taken by the U.S. government to address cybersecurity and AI-related national security risks. These include numerous cybersecurity-related executive orders and strategy documents issued by both Presidents Biden and Trump since 2017, as well as a major push last year to put basic guardrails in place around AI, through initiatives such as the Oct. 30 AI executive order and a series of voluntary safety commitments that leading AI and cloud companies made to the Biden administration in August and September of last year.

The new cloud KYC rules will build on these initiatives by requiring the likes of Amazon Web Services, Microsoft, Oracle, IBM, and Google Cloud to verify who is using their cloud infrastructure and file reports with the U.S. government when “foreign persons” rent their servers to train very large AI models. 

The policy is designed to boost U.S. national security by making it harder for hostile states and non-state actors to use the cloud to train AI systems that could be used for hacking campaigns and other malicious purposes. 

In January, the Commerce Department issued a Notice of Proposed Rulemaking outlining its plans for the new rules and seeking input from industry and other stakeholders. After a comment period that closes today, policymakers will sift through the feedback with an eye toward issuing final rules before the end of the year. 

The rules could give the Biden administration a unique view into who is using U.S. cloud providers’ digital infrastructure to train advanced AI systems. But they could also raise fresh concerns among both allies and adversaries about privacy, other countries’ dependence on America’s biggest tech firms, and the extraterritorial reach of U.S. law. 

Boosting Cybersecurity, Limiting China’s Access to Advanced AI Hardware

“Know-your-customer,” or KYC, is a concept best known in the financial system, where many countries require banks to vet customers and report suspicious transactions to combat money laundering and terrorism financing. Extending the concept to cloud computing is part of a broader trend of the U.S. government tightening oversight of digital technologies to address national security concerns. It is also part of an accelerating push by governments around the world to address perceived risks of artificial intelligence.

The cloud sector is a tempting target for extending KYC rules. Since big internet platforms first started renting out their spare server capacity to other companies in the 2000s, the cloud has become part of the fabric of the global economy. Three U.S. “hyperscalers,” Amazon Web Services, Microsoft, and Google Cloud, currently account for about two-thirds of the roughly $180 billion global market for cloud infrastructure-as-a-service, where platforms rent out servers, data storage, and other information technology (IT) infrastructure for other companies to use.

So far, China is the only country in the world that rivals the United States in its production of digital giants. Over the past decade, Huawei, Alibaba, and Baidu have begun competing directly with the big U.S. cloud players in some markets, notably in the Global South.

Cloud hyperscalers like these have become indispensable for advancing the cutting edge of AI research and commercialization, in addition to providing a scalable alternative to traditional IT infrastructure for millions of companies around the world. 

A unique combination of deep pockets and heavy concentrations of engineering talent allows cloud hyperscalers to marshal the large clusters of specialized semiconductors that make large language models (LLMs) and other cutting-edge AI systems work. Many of the world’s leading AI developers rely on this infrastructure to train and host their models, and even to subsidize their research and development through cloud computing credits. This is why, along with the likes of OpenAI, Anthropic, and other foundation model developers, cloud players are at the center of governments’ efforts to address AI risks.

Scrutiny of AI risks has intensified with the advent of LLMs that can write and analyze computer code or create convincing fake audio and video. The upcoming generation of multimodal AI systems capable of performing more complex problem-solving tasks has also fanned concerns in the U.S. national security, intelligence, and AI safety communities that powerful AI could be hijacked by bad actors in ways that pose serious cybersecurity risks. A hostile country or non-state actor equipped with a cutting-edge AI system—or that can access one through the cloud—could potentially use it to write malicious code, scan networks for vulnerabilities, or trick people into revealing passwords.

KYC rules for cloud providers are intended to address these risks by shining a light on who, exactly, is training the most powerful AI systems. In its January notice, the Commerce Department laid out its plans to formalize KYC and reporting requirements as well as details of the “special measures” that it will use to intervene when it identifies a concrete threat. 

The draft rules call for cloud companies to implement customer identification programs and provide details about the types of information that companies will be required to report to the Commerce Department when foreigners use the company’s cloud infrastructure to train a large AI model. The proposed rules would also capture “resellers”—intermediaries that bundle and sell cloud service providers’ infrastructure and services to others.

Under the Commerce Department’s current proposal, cloud providers would be required to collect data including customer names, addresses, payment details, telephone numbers, and internet protocol addresses, and take steps to identify both direct account holders and their beneficial owners.

They would also be required to report when they or their resellers become aware of foreign customers entering transactions with them to train large AI models, along with details of AI training runs. This would include how much computing power the training run is expected to use, the anticipated start date and completion date, information about the underlying AI model, and information about the foreign user’s cybersecurity practices, including whether it has been subject to security breaches that could give bad actors access to model weights or other potentially sensitive information. 

If that customer is a malicious cyber actor or is based in a country with a history of sponsoring cyber mischief, the U.S. government would be able to use “special measures” to force the cloud provider to close the customer’s account, or to prevent the customer from opening one in the first place.
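To make these collection and reporting requirements more concrete, the sketch below shows, in Python, the kinds of records a cloud provider might assemble under the draft rules. It is a minimal illustration only: the field names, types, and structure are assumptions drawn from the notice’s description, not the Commerce Department’s actual reporting schema.

```python
# Hypothetical sketch of a cloud provider's KYC and training-run report.
# All names and fields are illustrative assumptions, not an official schema.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class CustomerRecord:
    """KYC data the draft rules would require providers to collect."""
    name: str
    address: str
    payment_details: str          # e.g., a masked payment reference
    telephone: str
    ip_addresses: list[str]
    beneficial_owners: list[str]  # owners behind the direct account holder


@dataclass
class TrainingRunReport:
    """Details reported when a foreign customer trains a large AI model."""
    customer: CustomerRecord
    expected_compute_ops: float   # estimated total operations for the run
    anticipated_start: date
    anticipated_completion: date
    model_description: str        # information about the underlying model
    security_practices: str       # the customer's stated cyber practices
    known_breaches: list[str] = field(default_factory=list)  # prior incidents
                                  # that could expose model weights


# Example of the kind of report a provider might file (all values invented).
report = TrainingRunReport(
    customer=CustomerRecord(
        name="Example Labs Ltd.",
        address="1 Example Way, Elsewhere",
        payment_details="card ending 1234",
        telephone="+44 20 0000 0000",
        ip_addresses=["203.0.113.7"],
        beneficial_owners=["Example Holdings PLC"],
    ),
    expected_compute_ops=1e26,
    anticipated_start=date(2024, 6, 1),
    anticipated_completion=date(2024, 9, 1),
    model_description="dense transformer LLM, ~400B parameters",
    security_practices="SOC 2 audited; model weights encrypted at rest",
)
```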

Policy Drivers

This push to address AI-related cybersecurity risks by introducing KYC rules for cloud providers has its origins in three related but distinct U.S. technology policy initiatives.

The first is a Trump administration effort to prevent bad actors that break into computer networks from using U.S. cloud infrastructure in their hacking campaigns. In one of his last official acts in office, President Trump in January 2021 signed an executive order directing the Commerce Department to develop new regulations that would require U.S. cloud companies to verify the identities of foreign customers who open or maintain accounts with cloud service providers. 

The order followed revelations in 2020 of a major security breach, commonly known as the SolarWinds hack. Cyber operators linked to Russia exploited a series of software vulnerabilities to gain access to U.S. and allied government computer networks, in part by hijacking Microsoft’s managed services in the cloud. 

Attempts to formalize cloud KYC rules in the wake of the incident stalled early in the Biden administration amid pushback from industry. The more recent cloud KYC push is an extension of this effort, but with a new focus on the use of the cloud to train powerful AI systems that could pose cybersecurity risks.

Significantly, the KYC push also dovetails with the ongoing U.S. campaign to limit China’s access to advanced semiconductors that are used in AI training and high-performance computing. This effort also began during the Trump administration, when the U.S. moved to restrict Chinese mobile networking equipment giant Huawei’s access to a range of U.S. technologies, including high-end semiconductors. Under President Biden, these restrictions expanded into a broader attempt to limit China’s access to advanced chips, focused on high-end graphics processing units, or GPUs, designed by U.S. chip giant Nvidia and manufactured by TSMC in Taiwan.  

Export controls put in place in October 2022 and expanded in October 2023 prohibited Nvidia from selling its most advanced GPUs directly to Chinese companies. But the new rules did not address what is now viewed as a loophole, and a potentially easier route to training advanced AI for Chinese firms: renting access to large GPU clusters in the cloud. The U.S. cloud KYC rules are an attempt to address this gap, by giving the government a window into who exactly is developing the most advanced models, what hardware platforms they are using, where they are being trained, and how they are being tested—along with recourse to “special measures,” where appropriate. 

Finally, the effort dovetails with the wider campaign to put guardrails in place around the safety of “frontier” AI, an issue that was a major topic not only for the U.S. government but also for global partners in the Group of Seven (G7) and venues like the U.K. AI Safety Summit last year. The Biden administration likely sees additional benefits of its cloud KYC push beyond cybersecurity. Once they have been trained using massive amounts of data and computing power, LLMs are capable of performing a variety of tasks. Shining a light on the most powerful AI models that could pose cybersecurity risks could also illuminate potential dual-use risks in other areas, like biosecurity and disinformation. 

The U.S. cloud KYC rulemaking initiative can therefore be seen as an attempt to address three different problems simultaneously: (a) making it harder for foreign malicious actors to use U.S. cloud infrastructure in their hacking campaigns; (b) making it harder for Chinese companies to circumvent U.S. semiconductor export controls; and (c) policing a broader set of perceived safety and national security concerns arising from a new generation of advanced AI systems.

Practical and Strategic Implementation Questions 

The Commerce Department is now working to address these intertwined AI, cybersecurity, and China-related competition risks. Along with circulating details of the proposed rules, it is seeking input from industry on practical questions, including how much it will cost companies to comply with the KYC and reporting requirements.

Industry players are likely to use the public feedback window to push for tweaks to the draft rules and seek clarification on how details of implementation will work in practice. One unknown for both companies and policymakers is how companies will verify some of the customer information the government is requesting. Cloud service providers already collect some customer information as a matter of course, including names, addresses, and other basic details needed to open accounts, and they can see when customers are using large amounts of computing capacity. But they may have to develop new data collection procedures to gain access to critical information about how models are being trained, procedures for safety testing, and indications of cyber-related breaches that could affect the security of models under development. In this process, cloud companies will likely rely on customers to report what could be considered sensitive information, like their practices for red-teaming AI models and their history of cybersecurity breaches.

It will take some effort to work out how best to obtain reliable data without saddling companies with major new compliance burdens. It is also unclear whether the data that cloud service providers collect will be sufficient to enable policymakers to determine risks and what to do about any risks that are identified. 

At a deeper level, the government’s cloud KYC push may revive questions about the legal authority behind both the cybersecurity and AI executive orders. Both documents lean heavily on the International Emergency Economic Powers Act (IEEPA), a 1970s-era law that grants the president expansive emergency powers and that also undergirds the U.S. sanctions system. During the Trump era, the expansive use of IEEPA—including a threat to use the law to stop U.S. companies from doing business with China, particularly a proposal to use IEEPA to restrict Chinese cloud providers from operating in the U.S. amid calls for “reciprocity”—sparked critiques from some legal scholars. One worry was that perceptions that these emergency powers were being overused could lead to IEEPA being challenged in court, with the potential to undermine the broader sanctions system. Depending on how the rules are implemented and enforced, this type of issue could become a concern again.

Another important question, from a geopolitical perspective, is how foreign governments, including U.S. allies and trading partners, will respond to mandated collection and reporting of this kind of detailed information to the U.S. government. 

Some U.S. partners in Europe and Asia are already uneasy about what they see as an economic overreliance on U.S.—and to a lesser extent, Chinese—cloud giants, particularly at a time when Washington and Beijing have shown that they are willing to exploit digital choke points to advance their policy objectives. 

While U.S. technology export controls and other restrictive measures to date have been aimed primarily at China and, to a degree, Russia and other hostile states, other governments may be wary of a new policy that gives the U.S. government the unilateral ability to order a hyperscaler to cut off AI customers it deems to present cybersecurity risks, without a clear understanding of how such decisions would be made.

Foreign governments may also see parallels between the new cloud KYC requirements and the CLOUD Act, a 2018 law designed to make it easier for U.S. law enforcement, backed by court orders, to access data held on overseas servers when it is needed for serious criminal or terrorism investigations. This didn’t go over well in some countries. For example, the CLOUD Act sparked a political and popular backlash in France, where it was perceived as another example of the extraterritorial application of U.S. laws to gain access to sensitive digital information.

Lingering resentment among the public and policymakers over the 2013 Snowden revelations about bulk U.S. electronic surveillance programs, never far below the surface in Europe, will also color perceptions of the new policy. So will concerns about a potential return to the White House by Donald Trump, who used national security justifications to launch tariffs on European steel and aluminum imports during his term in office. 

To date, the U.S. cloud KYC initiative has not grabbed headlines in European capitals or in Tokyo in the same way that the CLOUD Act or U.S. semiconductor export controls did. However, concerns have been bubbling up in policy circles. A group of French AI experts recently cited the Oct. 30 AI executive order in a report to President Emmanuel Macron, warning that U.S. plans for a cloud KYC and reporting regime for advanced AI models raised privacy and trade secret concerns and could “reinforce American dominance” by slowing other countries’ development of AI capabilities.

The pushback in France may partially reflect the influence of Mistral AI, an ambitious and well-funded French AI startup that has the ear of President Macron. But it is likely that other European allies and key partners in other parts of the world, like Japan, share similar concerns about the long-term implications for their access to computing power provided by U.S. cloud giants. Concerns about intellectual property and privacy are also understandable, given the sensitivity of some of the data that would be collected and handed over to the U.S. government.  

A unilateral cloud KYC push may embolden factions within the EU and potentially in other jurisdictions that want to institute restrictive cybersecurity certifications and other protectionist digital policies to reduce the influence of U.S. cloud players and give a leg up to domestic rivals. Concerns about the privacy implications of cloud KYC rules could also give an additional hook for EU privacy activists to challenge the legal status of EU-U.S. data flows. 

Although some details of implementation may be open to negotiation, complaints from industry and concerned U.S. allies are unlikely to derail the U.S. cloud KYC initiative. There is strong momentum in U.S. policy circles for enacting policies that will deliver better visibility into who is using U.S. cloud infrastructure to train powerful AI systems that could pose risks to national security. U.S. officials can argue that the cloud KYC rules are narrowly targeted and initially will apply only to a handful of foreign companies with the financial means and technical wherewithal to train the most advanced AI models. 

Moreover, most if not all of the most important AI companies based in Europe or in other like-minded democracies have already agreed to share detailed information about model training with the U.S. government, under the voluntary AI safety commitments agreed last August. Those commitments formed the basis of a code of conduct for companies working on advanced AI that has set expectations for this type of reporting across the G7.

Some level of KYC is going to be required, but there will likely be a back-and-forth process with industry that could include exemptions and other measures that would ease some of the compliance burden.

A Multilateral Approach to AI Frontier Governance?

Washington’s initial, unilateral push on cloud KYC will not preclude other countries from taking steps toward what could eventually become a more integrated multilateral approach. This could ultimately be a more palatable way to verify who is training advanced AI systems and how, since it would reduce concerns related to national sovereignty and U.S. government access to potentially sensitive data. Over the past 18 months, the U.S. has attempted to do something similar in semiconductors, with mixed results. After acting unilaterally to enact tough new export controls on shipments of advanced U.S. semiconductors and semiconductor manufacturing equipment to China starting in October 2022, Washington has been struggling to bring the Netherlands and Japan on board by getting them to institute their own new controls. 

The two countries, both close U.S. allies and key players in the market for advanced semiconductor manufacturing equipment, have bristled at the impact of the controls on their domestic technology leaders. They are concerned about breaking existing contracts, potential retaliation from Beijing, and the intellectual property risks of withdrawing the engineers who service sensitive equipment. While aligning with Washington to some degree, these allies would prefer a multilateral approach to these types of controls, such as the now largely dysfunctional Wassenaar Arrangement, and have been pushing back on what they perceive as Washington’s shifting goalposts.

The experience in semiconductors has added to perceptions in many Western capitals that the U.S. is increasingly willing to take unilateral action and then pressure allies to comply with its extraterritorial policies. Pursuing a similar approach for training AI models in the cloud could add to these concerns.

There may never be a better time to start the long diplomatic grind toward a multilateral deal on cloud KYC. Over the past year, the issue of developing a global framework for long-term governance of frontier AI models has attracted significant attention—at the U.K. AI Safety Summit in November, which featured representatives of over two dozen countries, including China; in the G7; and at the United Nations.

With significant goodwill in many quarters for establishing a global approach to AI safety risks, as demonstrated by the U.K.’s Bletchley Park process, there may be an opportunity for the U.S. to work closely with allies, first within a smaller group of like-minded countries and then multilaterally, to establish norms and local reporting mechanisms.

A multilateral regime based on AI developers reporting to national governments, which could then share selected information with one another based on mutually agreed criteria, might resolve some of the sovereignty, privacy, and intellectual property concerns outlined above. U.S. cloud companies could backstop the process by validating that notifications have taken place but would not be routinely reporting sensitive information about foreign companies’ AI training runs to the U.S. government. 

Governments would have to think carefully about how to handle reporting of large training runs by companies or other entities based in countries that fall outside such an agreement. Either way, dialogue between governments, and with industry players, will be important for finding the right balance. 

Once such a system is in place, the major challenge will be determining how to include Chinese companies and Beijing in the effort, given China’s role as the second largest player in the AI domain in terms of companies, software engineers, and technology stacks. 

Over the long term, any global framework for tracking who is training frontier models will need to determine how to accommodate China in order to be maximally effective. 

A series of follow-up meetings stemming from last year’s U.K. AI Safety Summit and the Bletchley Park process, hosted first by South Korea and later this year by France, may be one promising venue for these discussions. Beijing is more likely to agree to a multilateral approach to AI safety that includes cloud KYC than to one that comes directly from Washington. Even then, it will be a significant challenge. That makes getting the KYC issue right even more critical.


Kevin Allison is the founder of Minerva Technology Policy Advisors, a geopolitical consulting firm focused on artificial intelligence and other critical and emerging technologies. Kevin is a Senior Advisor at Albright Stonebridge Group and previously helped found Eurasia Group’s geotechnology practice. Earlier in his career, he spent more than a decade as a correspondent and columnist for Thomson Reuters and for the Financial Times, where he covered Silicon Valley. He is based in Washington, DC.
Paul Triolo is a Partner and Senior Vice President for China and Technology Policy Lead at ASG. He advises clients in technology, financial services, and other sectors as they navigate complex political and regulatory matters in the US, China, the European Union, India, and around the world. He is frequently quoted on technology policy issues in media outlets including The New York Times, The Wall Street Journal, The Economist, the South China Morning Post, and others. He speaks regularly at conferences and has authored many journal articles and book chapters on global technology policy and China-related issues. He also serves as a senior associate with the Trustee Chair in Chinese Business and Economics at CSIS.
