
The False Choice in the Debate Over Artificial Intelligence Regulation

Matthew Tokson, Yonathan A. Arbel, Albert Lin
Friday, April 26, 2024, 9:54 AM

Should regulators focus on present-day or potential future AI risks? Both.

Individuals working on computers (Collections École Polytechnique / J. Barande, https://commons.wikimedia.org/wiki/File:Symposium_Cisco_Ecole_Polytechnique_9-10_April_2018_Artificial_Intelligence_%26_Cybersecurity_(40466246635).jpg; CC BY-SA 2.0 DEED)


Since the release of ChatGPT roughly a year and a half ago, the promises and perils of artificial intelligence (AI) have captured the world’s attention. We are currently in the midst of a vigorous debate about which AI harms to focus on—those occurring now or more speculative harms that might happen in the future. This argument started on social media but has now reached major scientific journals and prominent news sources. It increasingly threatens to undermine calls for robust AI governance. Political momentum for meaningful AI regulation is unlikely to materialize if those calling for regulation cannot agree on why AI is so dangerous in the first place.

Yet the debate over which AI harms to target and which to ignore is based on a questionable premise, and it presents a false choice. In practice, recognition of short-term and long-term AI risk is mostly complementary, with each type of risk strengthening the case for systemic AI regulation. As we argue in a recent law review article, effectively addressing serious AI harms will require regulatory oversight at every major stage of the AI process, from design, to training, to deployment, to post-deployment fine-tuning. This is as true of present-day harms as it is of potential future risks. And many of the regulatory steps necessary to address short-term harms are important first steps for regulating advanced future AI systems. We do not need to choose.

The Present and Near-Future Harms of AI

AI threatens serious harms in the present and the near future, many of them inherent in the technology itself. 

Algorithmic decision-making based on historical data can project historical inequality into the future, as past discriminatory patterns are incorporated into present and future decisions. For example, a model assigned to review resumes for a tech company might downgrade women candidates and upgrade men, much as Amazon’s hiring algorithm did. After all, in the historical data, men were hired more frequently. The result is an ongoing discriminatory cycle for historically marginalized groups.
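
To make the mechanism concrete, the following is a minimal, purely illustrative sketch in Python. The data, feature names, and coefficients are invented for this example, and it does not depict Amazon’s system or any real hiring tool; it simply shows how a model fit to historically biased hiring decisions can score two equally qualified candidates differently by gender.

    # Illustrative only: synthetic data in which past hiring decisions favored men.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Features: years of experience (relevant) and gender (should be irrelevant).
    years_experience = rng.normal(5, 2, n)
    is_male = rng.integers(0, 2, n)

    # Historical "hired" labels reflect past bias: being male boosted the odds of hire.
    logit = 0.8 * (years_experience - 5) + 1.5 * is_male - 0.75
    hired = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([years_experience, is_male])
    model = LogisticRegression().fit(X, hired)

    # Two candidates with identical experience, differing only in gender.
    woman = [[5.0, 0]]
    man = [[5.0, 1]]
    print("P(hire | woman):", model.predict_proba(woman)[0, 1])
    print("P(hire | man):  ", model.predict_proba(man)[0, 1])
    # The model reproduces the historical gap, projecting past discrimination forward.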

AI also endangers privacy because it enables the gleaning of intimate insights about people’s lives from seemingly innocuous, freely available data. As machine learning has become more sophisticated, it has allowed companies to gain more insight into consumers and their behavior via advanced pattern recognition and data analysis. Each of us generates voluminous data as we use our various electronic devices. Companies can collect or purchase this data and process it using AI to infer sensitive information about our lives, including our health conditions, political affiliations, spending habits, content choices, religious beliefs, and sexual preferences.

Further, in the medium-term future, AI may displace huge proportions of the workforce without creating new jobs for which humans are better suited. The socioeconomic consequences of this trend may be exacerbated by the possibility that the economic benefits of AI may accrue largely to a concentrated few while potentially enormous costs fall on workers. Historically, the displacement of existing workers by new technologies has been counterbalanced by the demand-increasing effects of productivity growth and the eventual creation of new tasks where human labor has a comparative advantage relative to machines. Such dynamics are not guaranteed, and AI threatens to impede or end this counterbalancing process by radically shrinking the number of tasks that humans can perform more effectively than machines. Additionally, even if AIs never achieve human-level capabilities, employers may find they are far more cost-effective than human workers across a huge variety of occupations and tasks. If widespread displacement occurs, our current social frameworks are ill suited to guarantee the well-being of the multitude of displaced workers or to address the resulting economic and social inequality.

Longer-Term AI Risks and the Difficulty of Alignment

Tech company executives who focus only on long-term risks while fighting meaningful regulation today are downplaying the serious dangers posed by today’s AIs. But that does not mean the long-term risks are negligible. Rather, as an ever-growing proportion of scientists, programmers, and observers have come to realize, AI poses massive, potentially existential risks to humanity in the long term. There is little reason to think that AI progress is about to grind to a permanent halt; more likely, it is just beginning. 

One can point to the limits of transformer architectures, the slowing of Moore’s law, or the constraints of current training paradigms, yet there is currently substantial progress and innovation on virtually all fronts. New chip designs, new model architectures, new sources of data, new data preprocessing methods, new model modalities, new training optimizations, new methods of model compression (like quantization), new methods of fine-tuning, new prompting techniques—the improvement frontier is vast. Even if AI development continues to move in fits and starts, with long “AI winters” interspersed with shorter periods of technological breakthroughs and rapid development, AIs are nonetheless likely to improve further over time.

If and when highly capable AIs arrive, it will likely be incredibly difficult to align them with human goals and norms. AI models optimize for effectiveness and efficiency in pursuing their goals, but they may achieve those goals in ways that undermine their designers’ intent. Articulating a goal for a complex AI model that encapsulates what designers truly want the model to achieve can be extremely challenging. Any gap between designers’ actual goals, with all their nuance and complexity, and the specifications given to an AI system can cause serious misalignment. Further, even assuming perfectly specified goals, advanced AIs given autonomy and extensive interfaces with the real world could cause substantial problems. An autonomous AI might exploit various aspects of its environment, overuse resources, cause safety hazards, deceive its users, or otherwise engage in unwanted behavior as it pursues its goals. The more complex and capable an AI system is, the more harm it might cause if it becomes misaligned.

We are already struggling to get today’s AIs to do what we want in the way that we want it, without unexpected consequences. Most famously, ChatGPT seems to go “insane” from time to time, giving bizarre or nonsensical answers and insulting or even threatening its users. Or consider the AI system trained to sort data quickly that concluded the fastest way to produce “sorted” data was to delete it all. Or the system that learned to pretend it was inactive to avoid the scrutiny of the researchers in charge of it. AI alignment research has lagged far behind capabilities research, and given the priorities of the companies racing to lead the AI sector, that is likely to continue.
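
The sorting example illustrates a pattern researchers often call specification gaming. The toy sketch below is our own invented illustration, not the actual system referenced above: when the objective only checks whether the output is sorted, deleting the data satisfies it just as well as sorting does.

    # Toy illustration of specification gaming (invented for illustration).
    # The designer meant "return the input, sorted," but the reward only
    # checks that the output is in sorted order.
    def reward(output):
        return 1.0 if all(a <= b for a, b in zip(output, output[1:])) else 0.0

    candidate_policies = {
        "sort the input": lambda xs: sorted(xs),
        "delete the input": lambda xs: [],   # degenerate, yet maximally rewarded
        "return input unchanged": lambda xs: xs,
    }

    data = [3, 1, 2]
    for name, policy in candidate_policies.items():
        print(f"{name}: reward = {reward(policy(data))}")
    # Sorting and deleting both earn full reward, so an optimizer that sees only
    # the reward has no reason to prefer the behavior the designer actually wanted.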

The Case for Comprehensive AI Regulation 

Recognizing multiple categories of AI risk is helpful in both practical and political terms. Regulating AI with a view toward immediate harms can lay the groundwork for future regulation of more advanced AI. Once initial AI regulations are in place, lawmakers can address new threats by amending existing laws rather than creating new legislation from whole cloth. Systemic AI regulations targeting present-day harms might require government prescreening for AI models, giving regulators a better chance to identify dangerous systems before they are deployed. Other laws focused on immediate harms might deter open-source or other hard-to-regulate forms of AI development, reducing tortious practices and risky developmental approaches.

On the flip side, acknowledging the potentially catastrophic risks of AI can help justify systemic AI regulation in the present day. Recognizing widespread concerns about catastrophic AI harms can bring attention, political momentum, and financial resources to the cause of AI regulation. It can motivate people and policymakers who may not normally be concerned about discrimination or privacy harms to support comprehensive AI regulation that can address those concerns. More broadly, the ultimate costs and benefits of AI are uncertain, and so is AI’s potential for existential harm. But acknowledging long-term harm as a real possibility can help resolve any ambiguity regarding the appropriateness of regulation.

Consider, as an example of this dual (long-term and short-term) purpose, proposals that would require red-teaming of models. On the one hand, red-teaming verifies that a model does not produce toxic outputs. On the other hand, the procedures involved are also useful for detecting how a model behaves in novel situations, how easily it can be manipulated, and how much potentially dangerous information it contains. And rules regarding ownership and corporate governance structures, for instance, are useful in ensuring that AI labs comply with domestic law, as well as in preventing race-to-the-bottom dynamics around releasing potentially unsafe models. Tort liability for downstream harms, such as liability for autonomous vehicles, can instill a broader sense of accountability and encourage careful testing of models in new environments.

***

To be sure, particularized AI regulations of downstream applications are also called for in many areas, but there is little reason to think that addressing one category of AI risk will impede addressing others. A political culture that recognizes AI risk in one area is more likely to be open to recognizing it in another. Identifying the issue and getting it on the policy agenda is the difficult step, and infighting is likely to hinder that effort. 

The systemic regulation of AI is necessary to prevent the most serious AI harms, present and future. It should become common ground between the two camps in the AI regulation debate.


Matthew Tokson is a Professor of Law at the University of Utah S.J. Quinney College of Law, writing on the Fourth Amendment and other topics in criminal law and procedure. He is also an affiliate scholar with Northeastern University's Center for Law, Innovation and Creativity. He previously served as a law clerk to the Honorable Ruth Bader Ginsburg and to the Honorable David H. Souter of the United States Supreme Court, and as a senior litigation associate in the criminal investigations group of WilmerHale, in Washington, D.C.
Yonathan Arbel is the Silver Associate Professor at the University of Alabama School of Law and Director of the AI Studies Initiative.
Albert Lin is a Professor of Law at the University of California, Davis School of Law.
