
Executive Branch AI and the Rule of Law: An Emerging Research Agenda

Cullen O'Keefe, Alan Z. Rozenshtein, Christoph Winter
Friday, May 1, 2026, 11:10 AM

Sooner or later, in this administration or a future one, AI will come for the federal workforce.

AI-generated image by Cullen O’Keefe.

It is by now cliché to warn that artificial intelligence (AI) seems poised to disrupt most major institutions and economic sectors in the coming decades. Gallons of virtual ink have been spilled—here at Lawfare and elsewhere—arguing that AI will require a fundamental reimagining of many policy domains, from biosecurity to cybersecurity to privacy.

Yet, as we rapidly approach America’s semiquincentennial, we fear that another looming threat has not received the attention it deserves: the risks to the rule of law from the use of frontier AI systems in the executive branch (ExecAI). Whether due to the current limitations of the technology, bureaucratic inertia, or something else, state-of-the-art AI systems have yet to fundamentally transform the functioning of the executive branch. Governmental adoption of software is famously slow and clumsy, and will likely continue to lag private-sector adoption for some time. But this is due neither to a lack of interest nor to a lack of effort. Sooner or later, in this administration or a future one, AI will come for the federal workforce.

We are not necessarily opposed to this. Each of us holds a different view of the extent to which an AI-enabled government is desirable in practice or in principle, but none of us thinks that AI should have no role in public administration. Indeed, some of us think that role should be expansive and that AI has the potential to ameliorate many of the problems of the modern administrative state. But it is essential that governmental adoption of AI safeguard the rights and liberties enshrined in America’s customs, laws, and Constitution—not least the separation of powers on which those liberties depend. And we are concerned that striking the right balance between governmental adoption of AI and the preservation of liberty may require more forethought than is commonly assumed—or currently underway.

AI Empowers the Executive ...

We start with the hypothesis that, absent specific countervailing policy measures, further advances in AI technology are likely to empower the executive branch at the expense of the coordinate branches of government—and therefore the structural safeguards of liberty.

One of us has already defended this view at some length, pointing to several key AI-driven trends that would tend to expand executive power:

  • Increased reliance on emergency powers due to fast-moving AI-driven security threats.
  • Increased surveillance and automated law enforcement.
  • Amplification of the bully pulpit through AI-augmented propaganda and mass persuasion.
  • Creation of a “double black box” in which already-opaque AI systems are further shielded from public scrutiny due to real and purported national security imperatives.
  • AI’s ability to serve as a “cognitive proxy” that can scalably and reliably convey the president’s policy preferences at all levels of executive decision-making.

Importantly, many of these trends are achievable with existing AI systems. Future, more capable AI systems may pose even graver problems. Others of us have written about AI companies’ latest goal of building “fully capable AI agents” that can perform any computer-based task as competently as a human expert. Such AI agents could, by definition, perform a large number of tasks currently performed by human civil servants and service members. But unlike human government employees, ExecAI agents may be perfectly obedient to their principal—in this case, the president and his principal officers. Under this vision, the role of the president is transformed from the overseer of a sprawling and often fragmented bureaucracy, staffed by humans with diverse motivations and values, to the commander of an efficient, compliant, and coordinated swarm of loyal robotic bureaucrats. Whatever the benefits that such a transformation might bring for efficiency and democratic responsiveness, it would nevertheless represent a dramatic augmentation of the president’s command and control of the levers of state power.

AI, as currently developed, is a centralizing technology. It is easier to centralize control of AI infrastructure (e.g., data centers) and intellectual property (e.g., model weights) than it is to centralize control of the largest source of wealth today: human capital, in all its diverse and diffuse glory. Institutions that can exercise effective control over these factors of production, and steer them toward their own ends, will be advantaged by the AI revolution. It is therefore natural that advances in AI will advantage the executive—the branch of government that possesses the “energy” and “unity” to make most effective use of this technology.

Congress, meanwhile, is famously “a ‘they’ not an ‘it’” (and is in any case driven by partisanship rather than unified in defense of its own constitutional prerogatives). Of course, while Congress certainly could leverage AI to better fulfill its constitutional role (on which more below), it is by design more difficult for it to decisively identify and assert its collective interests against a recalcitrant executive. And on a programmatic level, the executive branch simply has more responsibilities—and therefore more opportunities to deploy AI—than either of the coordinate branches.

... Thereby Creating Structural Risks ...

An AI-empowered executive is not, to be clear, inherently problematic. Few would claim that our human bureaucracy is a perfect model of operational efficiency. And while reasonable minds may differ on the extent to which bureaucratic friction is an appropriate check on executive action, there are certainly significant benefits to further empowering the president to execute on his policy agenda.

That said, the American system of government has always relied on internal and interbranch checks and balances. As Weber recognized (and as scholars have continued to argue today), the partial independence of the bureaucracy from political leadership is not an incidental by-product of administrative complexity but a structural feature of modern governance. Those checks and balances, in turn, were designed in a very different technological era and may therefore be implicitly premised on technological assumptions that AI will displace. Consider the following ways in which an AI-enabled executive may challenge existing checks and balances.

Less Resistance to Unlawful or Unpopular Orders

Today, human beings carry out the president’s orders. Those subordinates have incentives that differ from the president’s and may therefore resist presidential directives from time to time. They are protected from adverse employment action if they refuse to violate the law. And if they carry out unlawful orders, they may face criminal liability at the hands of a subsequent president. They also carry with them their own set of political, moral, and social commitments that may cause them to resist sufficiently distasteful—if not illegal—directives. By default, there is no reason to think that ExecAI systems will face any of these incentives. Instead, there is every reason to assume that, absent legal requirements to the contrary, they will be optimized for obedience to their principals—and ultimately the president.

Less Exposure of Executive Branch Scandals

Whistleblowing and leaks to the press are key means by which Congress, the public, and internal executive branch watchdogs become aware of executive branch abuses. Innumerable executive branch scandals, including the My Lai massacre, the Pentagon Papers, the abuses at Abu Ghraib, the 2013 Snowden leaks, and Operation Fast and Furious, have come to light and catalyzed reforms and accountability solely because conscientious employees and officers spoke up in one way or another. But, by default, ExecAI systems will not whistleblow: They will necessarily be engineered to safeguard government secrets. Recognizing this, government officials may rely increasingly on AI systems to carry out their most controversial deeds.

Quicker Coordinated Action Evading Review

Two strengths of AI systems are their speed and coordination: their ability to make decisions and execute actions at a superhuman pace and scale. This, of course, has numerous benefits. But it might also erode courts’ ability to intervene and enjoin lawless government action before it is complete. Injured citizens would be left to rely on ex post remedies, which are more difficult to secure (especially against the federal government) and may be completely inadequate.

Obfuscated Attribution

Notwithstanding the recent trend of Immigration and Customs Enforcement (ICE) agents using masks to obscure their identities, it is generally possible to verify whether a flesh-and-blood human is a federal officer. On the internet, however, nobody knows you’re a dog: It may be possible for ExecAI systems to take virtual actions without revealing themselves as such. This may make it hard to attribute some harmful act (say, the hacking of a rival politician) to the responsible governmental actor—and therefore secure accountability.

... Atop a Stressed Foundation

These would be mighty challenges in the best of times. But of course, they come at a moment of significant constitutional turmoil. A full survey of our civic woes is well beyond the scope of this article. However, the following trends make us less than fully confident that the republic is currently well-equipped to handle an AI-driven expansion of executive power:

These specific recent challenges are instantiations of longer-term trends that suggest a slow but steady erosion of interbranch checks and balances. Consider, for example:

It remains to be seen whether these trends progress alongside AI. A renewed era of broader good governance reforms and congressional assertiveness would certainly make us more optimistic about the future of AI in government. But, regrettably, this does not seem imminent.

An Agenda for Rule of Law in the Age of Executive AI

In the meantime, then, what are we to do about the steady march toward algocracy? If constitutional salvation is beyond our immediate reach, are there at least more targeted reforms that would enable future presidents to use ExecAI in ways that safeguard the rule of law and merit the trust of even their most strident opponents—and of large majorities of the American people?

We are legal scholars who have converged on these questions in our research agendas. Although we intend to continue to publish scholarship on AI and the rule of law, we think the topic is likely to remain under-resourced compared to its likely significance in the coming decades. To that end, we have begun to build a community of scholars, analysts, and practitioners interested in these questions. This article compiles some of our interim conclusions as to promising directions for further research. Some of these questions are old questions made newly urgent by advances in AI technology; others have been formulated more recently. Regardless, we hope that they will spark further research in this area and a better understanding of the key challenges and opportunities surrounding ExecAI.

Additional relevant research agendas can be found in “Law-Following AI” and Lawfare’s “Open Questions in Law and AI Safety: An Emerging Research Agenda.”

Foundational Questions

  1. What types of ExecAI systems pose the greatest threat to the rule of law? Are agentic AI systems particularly risky? If so, why? And how should “agentic AI” be defined, anyway?
  2. Which governmental functions are most vulnerable to abuse through the use of ExecAI? Beyond the obvious cases of law enforcement and use of lethal force, where could ExecAI severely undermine the rule of law if not carefully designed? For example, how can we differentiate between uses of ExecAI that pose serious risks and those that are merely experimental?
  3. When are the trade-offs between congressional oversight and regulation on the one hand, and executive branch capacity on the other hand, the harshest? When are such trade-offs less harsh than they may seem at first blush?

Empowering the People

  1. Is it desirable to create a federal cause of action for money damages when the federal government violates citizens’ rights using ExecAI?
  2. Do citizens have sufficient tools to discover whether ExecAI has been used to violate their rights?
  3. Are changes to immunity rules or indemnification practices needed to protect citizens from misuse of ExecAI?
  4. Can legal AI tools help citizens more effectively vindicate their rights?
  5. How do we prevent or mitigate an asymmetry in the quality and quantity of AI tools available to the government versus individuals? How do we navigate trade-offs between preventing misuse of dual-use AI systems and concentrating power in the government?
  6. Does ExecAI require a fundamental rethinking of prosecutorial discretion and doctrines regarding selective and vindictive prosecution?
  7. Are reforms to Section 1983 needed to protect citizens from state misuse of AI?

Augmenting the Legislature

  1. How can ExecAI be designed so as to facilitate effective congressional oversight? For example, could ExecAI be designed to automatically whistleblow to Congress in appropriate cases? Is it possible to design ExecAI so that it provides automatic, real-time updates to designated committees? Or could ExecAI be designed to automatically comply with congressional subpoenas? Less ambitiously, is it possible to design AI systems that would audit ExecAI without revealing sensitive information?
  2. Could AI help Congress rapidly respond to emerging abuses of ExecAI? For example, could AI tools help Congress rapidly negotiate and draft new legislation that commanded bipartisan support? Could AI drafting tools create more precise rules for the allowed and proscribed uses of ExecAI?
  3. How can Congress stay informed about ExecAI capabilities and use cases, especially in sensitive domains like national security?
  4. Can AI systems help members of Congress provide better constituent services? Can they enable quicker resolution of governmental failures brought to representatives’ attention?

Augmenting the Judiciary

  1. Can ExecAI be designed so that it will automatically comply with court orders?
  2. How can the judiciary be confident that it has accurate information about how ExecAI is being used?
  3. If AI is a centralizing technology that inherently advantages the executive, maintaining constitutional equilibrium may require differential acceleration of AI capabilities in the coordinate branches—especially the judiciary. What forms of AI tools for the judiciary could help close the capability gap with an AI-enabled executive without compromising other judicial values?
  4. How can courts preserve the effectiveness of preliminary and injunctive relief when ExecAI enables government action at a speed and scale that outpaces existing judicial processes?
  5. If AI can equip courts and Congress with expertise comparable to that of the executive branch, does the traditional justification for deference to the executive—already weakened under Loper Bright—continue to hold?

Internal Checks and Balances

  1. Are reforms to the Office of Legal Counsel—or the effects of its opinions—necessary to prevent the use of biased legal opinions to greenlight harmful uses of ExecAI? What reforms are constitutionally permissible?
  2. Can AI tools empower executive branch whistleblowers, such as by providing an anonymous and untraceable means of providing information to inspectors general, Congress, and other governmental watchdogs?
  3. Are key oversight bodies like the Government Accountability Office and inspectors general well equipped to oversee uses of ExecAI? What additional authorities or institutional competencies do they need to perform their functions well in the age of AI?
  4. Can these oversight bodies be structured so that they have sufficient insight into the executive branch while retaining sufficient independence from the president, especially as the Supreme Court embraces a unitary vision of the executive branch?
  5. When is bureaucratic friction from human decision-making essential to the preservation of liberty? Can ExecAI improve executive functions without removing constructive forms of friction? What would a thoughtful balance of efficiency and friction look like?
  6. When are human-in-the-loop and similar requirements a valuable means of ensuring individual accountability for AI actions? When such requirements are inadequate, how can they be augmented?

AI and National Security

  1. How should Congress regulate procurement of military AI systems?
  2. What special rules should govern the domestic use of military AI, or the use of military AI against Americans or on domestic soil? How would such rules be designed and enforced?
  3. To what extent is it desirable to hard-code legal or ethical constraints in military AI systems that cannot be easily overridden by commanders? How can we be confident that such constraints are not exploitable by an adversary?
  4. How should AI be integrated into intelligence-gathering and analysis functions? What reforms would simultaneously enhance national security and safeguard civil liberties if the cost of automated intelligence collection and analysis approaches zero?

Procurement of ExecAI

  1. What are the outer bounds of Congress’s ability to limit the ways in which ExecAI functions through its power to condition procurement?
  2. Does the government have adequate means of ensuring that ExecAI works as promised? For example, how can Congress, the executive branch, and the American people be confident that any particular ExecAI has been procured in accordance with law and satisfies all design requirements imposed by law? How, for example, can they be confident that there are no “backdoors” that would cause the AI to behave lawlessly—or contrary to American interests—under certain conditions?
  3. How can Congress ensure that ExecAI tools are not used for functions for which they were not authorized?
  4. Should Congress take a very granular approach to approving procurements of ExecAI? For example, should the default rule be that Congress must approve individual ExecAI systems for deployment in certain functions? How could Congress make informed assessments about the safety of proposed ExecAI systems?

We do not know how quickly or dramatically AI will transform the executive branch. But the structural risks outlined above do not require speculative leaps about future technology. Many could be realized with AI systems that already exist or are under active development. The time to build the legal and institutional frameworks for governing ExecAI is before these systems are deployed at scale, not after. We hope that the questions posed here will encourage scholars, practitioners, and policymakers to take up this work—and that the answers will come soon enough to matter.


Cullen O'Keefe is the Director of Research at the Institute for Law & AI (LawAI) and a Research Affiliate at the Centre for the Governance of AI. His research focuses on legal and policy issues arising from general-purpose AI systems, particularly risks to public safety, global security, and the rule of law. Prior to joining LawAI, he worked in various policy and legal roles at OpenAI over 4.5 years.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland. He also speaks and consults on technology policy matters.
Christoph Winter is an Assistant Professor of Law and AI at the University of Cambridge and the Director of the Institute for Law & AI.