Cybersecurity & Tech

Offshore: The Coming Global Archipelago of Corrosive AI

Brian Nussbaum
Wednesday, June 14, 2023, 4:00 AM
Regulating artificial intelligence may be much harder than many imagine; the challenges of controlling money laundering and financial crime illustrate why.
ChatGPT, the AI program suspended in Italy in March by its parent company following regulatory questions. (Daniel Foster; CC BY-NC-SA 2.0)

Published by The Lawfare Institute

A recent opinion piece in Time drew scathing criticism when it suggested that, in an effort to control countries violating a moratorium to prevent artificial intelligence (AI) run amok, it might be necessary to “…destroy a rogue datacenter by airstrike.” While most people found such suggestions disconcerting or hyperbolic, the broader expectation that artificial intelligence will be regulated in various ways seems entirely plausible. Countries such as the United States, and supranational bodies such as the European Union, will likely regulate artificial intelligence as its impacts—social, political, life safety, intellectual property—are significantly felt by their people and economies. That said, there is currently no consensus on what this regulation would look like. Attempts to control and strategically harness AI and related technologies are often compared to efforts around various other technologies, from nuclear weapons to cyberattack capabilities. One underutilized metaphor for this problem, however, is that of money laundering.

In the same way that a global financial sector, with infrastructure around the world, has enabled criminals, spies, and the very wealthy to avoid taxes, loot national treasuries, hide assets, and evade enforcement; so too will a global technology sector have the potential to create a never-ending chain of “jurisdictional arbitrage” opportunities in which AI entrepreneurs and companies can evade regulation. The financial version is commonly referred to as the “offshore” finance industry, a network of firms and providers that enable financial crime, corruption, and tax evasion on a global scale.

For example, a trust in Liechtenstein can own a shell company in Jersey that controls a series of smaller shell companies registered in the Cayman Islands, which respectively own a yacht in Sardinia that is titled in Panama, and an apartment in Miami, all for the beneficial ownership of an oligarch made rich by looting funds from a Russian mining conglomerate—an arrangement that ultimately enables that oligarch to evade taxation, justice, and regulators in all of those jurisdictions. If countries begin regulating AI effectively, a similar “offshoring” of AI will emerge in a mess of regulatory arbitrage.

Imagine a firm that wants to use a text-generating AI model with no protections built in to create misinformation in an attempt to swing a national election in a neighboring country. The firm wants to sell that content to one of the political parties competing in the election. It might not be allowed to do this in its home country because of regulation on the use of AI models and infrastructure. However, a holding company based in Bermuda could run an algorithm that it licenses from a firm in Ireland, for example, on training data purchased from a country with minimal online privacy protections, in a data center that it leases in Brazil, while channeling the profits from the sales of its services to a shell company in Guernsey, where they will then enter the offshore financial world.

The global regulatory landscape around AI is spotty and patchy, riddled with uncertainty, and changing very rapidly; for example, the General Data Protection Regulation in Europe includes various requirements and protections that do not apply in other countries. This is a recipe for an army of lawyers and consultants finding ways to exploit what is legal in one jurisdiction and illegal in another, enabling the owners and operators of these sorts of systems to skirt legal liability and constraints on behavior. Not all AI owners and operators will do this, just as not all financial firms take advantage of questionable methods of offshore finance, but it is likely that enough will to seriously undermine such regulation. Already, large operators have made changes to offerings and availability based on regulatory questions, such as when OpenAI temporarily cut off access to ChatGPT in Italy. Right now these systems, or at least the most effective of them, are large and expensive and tend to be owned and operated by large firms with much to lose—both legally and reputationally—from irresponsible behavior. But it is hard to imagine that such systems won’t proliferate to smaller and less constrained organizations; these models will likely get smaller and less resource intensive, and spread in ways that make both use and abuse more likely.

AI and finance, and especially illicit AI and illicit finance, have much in common. Both move more or less at the speed of electrons, typically feature assets that are hard to find or seize across borders, require complicated cross-jurisdictional cooperation to investigate, and thrive on a regulatory patchwork that enables a “race to the bottom” in compliance. The United Nations has acknowledged the challenges that arise from attempts to regulate AI and related data analytics across borders in a presentation called AI Ethics in Cross Border Commerce, saying “no nation alone can regulate artificial intelligence because it is built on crossborder data flows.”


It’s important to note that this is not just an international dynamic. While countries sometimes compete for financial business in ways that enable destructive jurisdictional arbitrage, so too do subnational units such as states. Whether someone is incorporating a company (Delaware), setting up a trust (South Dakota), or avoiding taxes based on their residence (Wyoming), states enable problematic behavior through their willingness to serve as “offshore” destinations onshore. There is little reason to think that state-level AI restriction would fare much differently.

There have already been numerous attempts to regulate various types of technologies at the state and local level, ranging from digital assets and cryptocurrencies, to social media platforms, to mobile phone applications. These attempts have had varying levels of success, as well as varying levels of public and industry pushback. That said, if the world of “offshore finance” (including the domestic “onshore” variations among states) is any indicator, it seems hard to imagine that very effective regulation will be put in place. In a world in which countries and subnational jurisdictions such as states and provinces are competing to attract investment and investors, observers can expect to see at least some level of the race-to-the-bottom dynamic described above.

This competitive dynamic does not only impact ethereal and digital realms like finance and AI; it also emerges in the physical world when ownership and liability can likewise be arbitraged across borders. Large ships that cross the world’s oceans are, somewhat strangely, disproportionately registered in a few small countries such as Panama and Liberia. While not typically operated from these tiny countries, the ships are legally domiciled there under “flags of convenience” in order to skirt costly and comprehensive safety requirements, tax and sanction regimes, requirements around working conditions, and general regulatory scrutiny. Importantly, the small countries that offer owners a freer hand do in fact benefit extensively from their competitive choices. That the owners of such ships, massive and easily trackable physical assets, have managed to create global systems of regulatory arbitrage suggests that constraining the ownership and operation of virtual and digital assets like artificial intelligence systems will be even more challenging.

Attempts to navigate and negotiate the many legal and compliance regimes that seem likely to emerge around AI will resemble the attempts companies and rich individuals have undertaken to manage the similarly baroque financial landscape. And as in the world of offshore finance, this process will introduce all sorts of operational, reputational, and legal risks. In the same way that the presence of dirty money can taint an otherwise clean financial institution, illegally obtained training data or algorithms that do not respect the law will present risks to technology companies. Like the complicated supply chains around precious metals, where illegally obtained gold finds its way into the broader legal market, it will often be hard for regulators to see where illegal or restricted data or services enter the AI ecosystem, especially when ownership and control are distributed across many countries or jurisdictions.

As with money laundering, the answers will come from some interesting and complicated places. Large private technological institutions will be deputized to improve compliance, as financial institutions have been in the anti-money laundering (AML) space. Multi-jurisdictional and multidisciplinary task forces and agencies will be required to investigate violations of these regulations—as they are for cyber and financial crime, narcotics trafficking, and other cross-border criminal activity. In addition to national-level efforts, as in the fight against financial crime, cross-national efforts by transparency- and corruption-related nongovernmental organizations and intergovernmental organizations will be key to managing these challenges. Also as in the offshore finance industry, whistleblowers and journalists will be key to both maintaining some level of scrutiny and exposing the more serious abuses. There won’t be a single organization or country that documents, investigates, and holds accountable those who use geographical and legal borders to skirt safety, privacy, and consumer protection requirements; like the fights against money laundering and tax evasion, it will ultimately be a collective effort.

This collective effort will require mixes of cooperation and various kinds of coercion, and it is worth being frank that it may not work very well. For many of the same reasons that efforts to counter money laundering have not been overwhelmingly successful, there are likely to be real problems in regulation and enforcement around cross-border malicious AI. These include strong financial interests from resource-rich companies and wealthy individuals, combined with the legion of lawyers, accountants, and consultants they employ; nations in competition for political and economic gains; baroque and byzantine rule sets that change much more slowly than technology; and the ability of data (like money) to move across borders quickly and in ways that are hard to monitor. The reasons that illicit financial flows remain a major problem, one that shows few signs of abating, suggest that illicit AI will likely be no easier to rein in.

Ultimately, the dangers of the offshoring of corrosive AI are coming. Quickly.  

Dr. Brian Nussbaum is an Assistant Professor at the College of Emergency Preparedness, Homeland Security, and Cybersecurity (CEHC) at the University at Albany. He is also an affiliate scholar with the Center for Internet and Society at Stanford Law School. He formerly worked as an intelligence analyst with New York State's homeland security agencies.
