Published by The Lawfare Institute
The newly developing “law of AI” has come to focus on risk regulation, and in many ways risk regulation seems like a good fit for governing the development and expanding uses of AI systems. AI harms tend to be systemic, occur at scale, raise causation challenges for potential litigants, and may not yet be vested (that is, they may constitute risks of future harm rather than present harm). All of these features pose challenges for traditional liability regimes.
But as I argue in this paper, risk regulation also comes with what I call “policy baggage”: known problems that have emerged in other fields where it has been deployed. Choosing risk regulation is itself a significant normative choice: it presumes that AI systems should be developed and used in the first place, rather than subjected to more precautionary approaches. Risk regulation thus embodies what Jessica Eaglin has called a “techno-correctionist” tendency prevalent in scholarship on AI systems: the tendency to try to make technology “better” rather than to question the politics and appropriateness of its use, and to explore more systematically whether, given its harms, it should be used at all.
Regulators should broaden their regulatory toolkit and move beyond, or at least supplement, the current narrow focus on AI impact assessments. If regulators want truly to address the harms caused by AI systems, they will have to do better than light-touch risk regulation.