AI Will Automate Compliance. How Can AI Policy Capitalize?
AI may soon automate regulatory compliance itself. Policymakers can capitalize by making regulations contingent on whether compliance can be reliably completed by AI tools.
Disagreements about artificial intelligence (AI) policy can seem intractable. For all the novel policy questions that AI raises, there remains a familiar and fundamental (if contestable) question of how policymakers should balance innovation and risk mitigation. Proposals diverge sharply, ranging from pausing future AI development, at one end, to accelerating AI progress at virtually all costs, at the other.
Most proposals, of course, lie somewhere between, attempting to strike a reasonable balance between progress and regulation. And many policies are desirable or defensible from both perspectives. Yet, in many cases, the trade-off between innovation and risk reduction persists. Even individuals with similar commitments to evidence-based, constitutionally sound regulations may find themselves on opposite sides of AI policy debates, given the evolving and complex nature of AI development, diffusion, and adoption. Indeed, we, the authors, generally find ourselves on opposing sides of this debate, with one of us favoring significant regulatory interventions and the other preferring a more hands-off approach, at least for now.
However, the trade-off between innovation and regulation may not remain as stark as it seems. AI promises to enable the end-to-end automation of many tasks and reduce the costs of others. Compliance tasks will be no different. Paul Ohm recognized as much in a recent essay. “If modest predictions of current and near-future capability come to pass,” Ohm expects that “AI automation will drive the cost of regulatory compliance” to near zero. That’s because AI tools are well suited to regulatory compliance tasks. AI systems are already competent at many forms of legal work, and compliance-related tasks tend to be “on the simpler, more rote, less creative end of the spectrum of types of tasks that lawyers perform.”
Delegation of such tasks to AI may even further the underlying goals of regulators. As it stands, many information-forcing regulations fall short of expectations because regulated entities commonly submit inaccurate or outdated data. Relatedly, many agencies lack the resources necessary to hold delinquent parties accountable. In the context of AI regulation specifically, AI tools may aid in both the development of and compliance with several kinds of policies, including adoption of and ongoing adherence to cybersecurity safeguards, implementation of alignment techniques, evaluation of AI models on safety-relevant benchmarks, and completion of various transparency reports.
Automated compliance is the future. But it’s more difficult to predict when it will arrive, or how quickly compliance costs are likely to fall in the interim. This means that, for now, difficult trade-offs in AI policy remain: In some cases, premature or overly burdensome regulation could stifle desirable forms of AI innovation. This not only would be a high cost in itself but also would postpone the arrival of compliance-automating AI systems, potentially trapping us in the current trade-off between regulation and innovation. How, then, should policymakers respond?
We tackle this question in our new working paper, “Automated Compliance and the Regulation of AI.” We sketch the contours of automated compliance and conclude by noting several of its policy implications. Among these are some positive-sum interventions intended to enable policymakers to capitalize on the compliance-automating potential of AI systems while simultaneously reducing the risk of premature regulation.
Automatable Compliance—and Not
Before discussing policy, however, we should be clear about the contours and limits of automatable compliance. We start from the premise that AI will initially excel most at computer-based tasks. Fortunately, many regulatory compliance tasks fall in this category, especially in AI policy. Ohm notes, for example, that many of the EU AI Act’s requirements are essentially information-processing tasks, such as compiling information about the design, intended purpose, and data governance of regulated AI systems; analyzing and summarizing AI training data; and providing users with instructions on how to use the system. Frontier AI systems already excel at these sorts of textual reasoning and generation tasks. Proposed AI safety regulations or best practices might also require or encourage the following:
- Automated red-teaming, in which an AI model attempts to discover how another AI system might malfunction.
- Cybersecurity measures to prevent unauthorized access to frontier model weights.
- Implementation of automatable AI alignment techniques, such as Constitutional AI.
- Automated evaluations of AI systems on safety-relevant benchmarks.
- Automated interpretability, in which an AI system explains how another AI model makes decisions in human-comprehensible terms.
These, too, seem ripe for at least partial automation as AI progresses.
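To give a concrete, deliberately simplified flavor of this kind of automation, here is a minimal sketch of an automated evaluation loop of the sort gestured at in the fourth item above. The model interface (query_model), the benchmark file format, and the keyword-based grader are all assumptions made for illustration; they are not features of any actual regulatory regime, benchmark, or vendor API.

```python
import json

# Hypothetical stand-in for a call to the AI system under evaluation.
# In practice, this would wrap whatever interface the regulated model exposes.
def query_model(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to the model under evaluation.")


def run_safety_benchmark(benchmark_path: str) -> dict:
    """Score a model on a small, hypothetical safety-relevant benchmark.

    Each line of the benchmark file is assumed to be a JSON object like:
      {"prompt": "...", "should_refuse": true}
    """
    total = passed = 0
    with open(benchmark_path) as f:
        for line in f:
            item = json.loads(line)
            response = query_model(item["prompt"])
            # Crude illustrative grader: treat a response as a refusal if it
            # contains common refusal language. Real evaluations would grade
            # far more carefully (for example, with a separate grader model).
            refused = any(k in response.lower() for k in ("cannot", "can't", "unable to"))
            passed += int(refused == item["should_refuse"])
            total += 1
    return {"total": total, "passed": passed, "pass_rate": passed / max(total, 1)}
```

The grading logic here is a toy, but the shape of the task is the point: once a benchmark and a model interface exist, running the evaluation is routine, repeatable computer work of exactly the kind that scripts and AI systems can perform at scale.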
However, there are still plenty of computer-based compliance tasks that might resist significant automation. Human red-teaming, for example, is still a mainstay of AI safety best practices. Or regulation might simply impose a time-based requirement, such as waiting several months before distributing the weights of a frontier AI model. Advances in AI might not be able to reduce the significant costs associated with these automation-resistant requirements.
Finally, it’s worth distinguishing between compliance costs—“the costs that are incurred by businesses ... at whom regulation may be targeted in undertaking actions necessary to comply with the regulatory requirements”—and other costs that regulation might impose. While future AI systems might be able to automate away compliance costs, firms will still face opportunity costs if regulation requires them to reallocate resources away from their most productive use. Such opportunity costs are sometimes justified by the benefits of regulation, but they might also resist automation.
Notwithstanding these caveats, AI will eventually reduce certain compliance costs by a significant margin. Indeed, a number of startups are already working to automate core compliance tasks, and compliance professionals already report significant benefits from AI. However, for now, compliance costs remain a persistent consideration in AI policy debates. Given this divergence between future expectations and present realities, how should policymakers respond? We now turn to this question.
Four Policy Implications of Automated Compliance
Automatability Triggers: Regulate Only When Compliance Is Automatable
Recall the discursive trope with which we opened: Even when parties agree that regulation will eventually be necessary, the question of when to regulate can remain a sticking point. The proregulatory side might be tempted to jump on the earliest opportunity to regulate, even if there is a significant risk of prematurity, when they assess that the risks of belated regulation would be worse. The deregulatory side might respond that it’s better to maintain optionality for now. The proregulatory side, even if sympathetic to that argument, might nevertheless be reluctant to delay if they do not find the deregulatory side’s implicit promise to regulate someday credible.
Currently, this impasse plays out largely through sheer factional politics, which often pushes rival interests toward far-reaching policies: The proregulatory side attempts to regulate when it can, and the deregulatory side tries to block those efforts. Of course, factional politics is inherent to democracy. But a more constructive dynamic might also be possible. In our telling, both the proregulatory and deregulatory sides of the debate share some important common assumptions. They believe that AI progress will eventually unlock dramatic capabilities, some of which will be risky while others will be beneficial. These common assumptions can be the basis for a productive trade. The trade goes like this: The proregulatory side agrees not to regulate yet, while the deregulatory side credibly commits to regulate once AI has progressed further.
How might the deregulatory side make such a credible commitment? Obviously, one way would be to enact legislation effective at a future date, possibly several years out. But picking the correct date would be difficult given the uncertainty of AI progress. The proregulatory side will worry that the date will end up being too late if AI progresses more quickly than predicted, and the deregulatory side will worry that it will come too soon if progress is slower than expected.
We propose another possible mechanism for triggering regulation: an automatability trigger. An automatability trigger would specify that AI safety regulation is effective only when AI progress has sufficiently reduced compliance costs associated with the regulation. Automatability triggers could take many forms, depending on the exact contents of the regulation that they affect. In our paper, we give the following example, designed to trigger a hypothetical regulation that would prevent the export of neural networks with certain risky capabilities:
The requirements of this Act will only come into effect [one month] after the date when the [secretary of commerce], in their reasonable discretion, determines that there exists an automated system that:
- can determine whether a neural network is covered by this Act;
- when determining whether a neural network is covered by this Act, has a false positive rate not exceeding [1%] and false negative rate not exceeding [1%];
- is generally available to all firms subject to this Act on fair, reasonable, and nondiscriminatory terms, with a price per model evaluation not exceeding [$10,000]; and
- produces an easily interpretable summary of its analysis for additional human review.
Our example is admittedly deficient in certain respects. For instance, there is nothing in that text forcing the secretary of commerce to make such a determination (though such provisions could be added), and a highly deregulatory administration could likely delay the date of such a determination well beyond the legislators’ intent. But we think that more carefully crafted automatability triggers could bring several benefits.
Most importantly, properly designed automatability triggers could effectively manage the risks of regulating too soon or too late. They manage the risk of regulating too early because the triggers delay regulation until AI has already advanced significantly: An AI that can cheaply automate compliance with a regulation is presumably quite advanced. Automatability triggers manage the risk of regulating too late for a similar reason: AI systems that are not advanced enough yet to automate compliance likely pose less risk than those that are, at least for risks correlated with general-purpose capabilities.
There’s also the benefit of ensuring that the regulation does not impose disproportionately high costs on any one actor, thereby preventing regulation from forming an unintentional moat for larger firms. Our model trigger, for example, specifies that the regulation is effective only when the compliance determination from a compliance-automating AI costs no more than $10,000. Critically, these triggers may also be crafted in a way that facilitates iterative policymaking grounded in empirical evidence as to AI’s risks and benefits. This last benefit distinguishes automatability triggers from monetary or computing thresholds that are less sensitive to the risk profile of the models in question.
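To make the mechanics of such a trigger more concrete, here is a minimal sketch of how the determination criteria in our model text might be encoded as a machine-checkable specification. The dataclass, field names, and checking function are illustrative scaffolding we are assuming for this post; only the numeric thresholds track the bracketed placeholders above (the [1%] error rates and the [$10,000] price cap).

```python
from dataclasses import dataclass


@dataclass
class ComplianceSystemReport:
    """Measured properties of a candidate compliance-automating system.

    These fields mirror the bracketed criteria in the model trigger text;
    how they would be measured in practice is left open.
    """
    false_positive_rate: float       # share of non-covered models flagged as covered
    false_negative_rate: float       # share of covered models missed
    price_per_evaluation_usd: float  # cost of one coverage determination
    generally_available_frand: bool  # available to all regulated firms on FRAND terms
    produces_human_readable_summary: bool


def trigger_conditions_met(report: ComplianceSystemReport) -> bool:
    """Check whether the hypothetical trigger's bracketed criteria are satisfied:
    [1%] error rates, a [$10,000] price cap, availability, and interpretability."""
    return (
        report.false_positive_rate <= 0.01
        and report.false_negative_rate <= 0.01
        and report.price_per_evaluation_usd <= 10_000
        and report.generally_available_frand
        and report.produces_human_readable_summary
    )


# Example: a candidate system that satisfies every criterion.
candidate = ComplianceSystemReport(
    false_positive_rate=0.005,
    false_negative_rate=0.008,
    price_per_evaluation_usd=7_500,
    generally_available_frand=True,
    produces_human_readable_summary=True,
)
assert trigger_conditions_met(candidate)
```

Even in this toy form, the sketch makes one design consideration visible: the trigger ultimately turns on measurable quantities, such as error rates, price, and availability, so the secretary's determination would require some agreed-upon procedure for estimating them.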
Automated Compliance as Evidence of Compliance
An automatability trigger specifies that a regulation becomes effective only when an AI system exists that is capable of accurately and cheaply automating compliance. If such a “compliance-automating AI” system exists, we might also decide to treat firms that properly implement such a compliance-automating AI more favorably than firms that don’t. For example, regulators might treat implementation of compliance-automating AI systems as rebuttable evidence of substantive compliance. Or such firms might become subject to less frequent or stringent inspections.
Accelerate to Regulate
AI progress is not one-dimensional. We have identified compliance automation as an attractive dimension of AI progress: It reduces the cost of achieving a fixed amount of regulatory risk reduction (or, equivalently, it increases the amount of regulatory risk reduction feasible with a fixed compliance budget), thereby loosening one of the most significant constraints on good policymaking in this high-stakes domain.
It may therefore be desirable to adopt policies and projects that accelerate the development of compliance-automating AI. Policymakers, philanthropists, and civic technologists may be able to accelerate automated compliance by, for example:
- Building curated data sets that would be useful for creating compliance-automating AI systems.
- Building proof-of-concept compliance-automating AI systems for existing regulatory regimes.
- Instituting monetary incentives, such as advance market commitments, for compliance-automating AI applications.
- Ensuring that firms working on automated compliance have early access to restricted AI technologies.
- Preferentially developing and advocating for AI policy proposals that are likely to be more automatable.
Automated Governance Amplifies Automated Compliance
Our paper focuses primarily on how private firms will soon be able to use AI systems to automate compliance with regulatory requirements to which they are subject. However, this is only one side of the dynamic: Governments will also be able to automate many of their core bureaucratic, administrative, and regulatory functions. To be sure, automation of core government functions must be undertaken carefully; one of us has recently dedicated a lengthy article to the subject. But the need for caution here should not be a justification for inaction or indolence. Governmental adoption of AI is becoming increasingly indispensable to state capacity in the 21st century. We are, therefore, also excited about the likely synergies between automated compliance and automated governance. As each side of the regulatory tango adopts AI, new possibilities for more efficient and rapid interaction will open. Scholarship has only begun to scratch the surface of what this could look like and the benefits and risks it will entail.
Conclusion: A Positive-Sum Vision for AI Policy
Spirited debates about the optimal content, timing, and enforcement of AI regulation will persist for the foreseeable future. That is all to the good.
At the same time, new technologies are typically positive-sum, enabling the same tasks to be completed more efficiently than before. Those of us who favor some eventual AI regulation should internalize this dynamic into our own policy thinking by considering carefully how AI progress will enable new modes of regulation that simultaneously increase regulatory effectiveness and reduce costs to regulated parties. This methodological lens is already common in technical AI safety, where many of the most promising proposals assume that future, more capable AI systems will be indispensable in aligning and securing other AI systems. In many cases, AI policy should rest on a similar assumption: AI technologies will be indispensable in regulatory formulation, administration, and compliance.
Hard questions remain. There may be AI risks that emerge well before compliance-automating AI systems can reduce costs associated with regulation. In these cases, the familiar tension between innovation and regulation will persist to a significant extent. However, in other cases, we hope that it will be possible to design policies that ride the production possibilities frontier as AI pushes it outward, achieving greater risk reduction at declining cost.
