The Inevitable Presidential AI Model
Alan Rozenshtein’s recent Lawfare article raises important, thought-provoking concerns about the ways in which artificial intelligence (AI) enables the accretion of presidential power. The article identifies “five distinct mechanisms through which [AI] will actually concentrate presidential power” and argues that the “real constraint on presidential power has always been practical, not constitutional.” The last of these mechanisms is the possibility that the president finally realizes the promise of the unitary executive by using AI to overcome practical constraints on his actions. As Rozenshtein points out, “[o]ne person just can’t process information from millions of employees, supervise 400 agencies, and know what subordinates are doing across the vast federal bureaucracy.”
Rozenshtein conceives of “TrumpGPT”: an AI model trained on a corpus of executive material, which acts as a sort of virtual presidential opinion-generator. The model encodes the president’s preferences and opinions, and it is designed to be deployed broadly throughout the executive branch. If a GS-11 employee in the Department of the Treasury wants to know whether her email complies with presidential guidance, she need only consult TrumpGPT, and—voila!—the president’s preferences shape her email. Such a model enables decisions to be made from a singular perspective, the president’s, and reaches every crevice of the executive branch.
A unitary executive, to be sure.
But, as I discuss in a forthcoming paper, the opportunity for greater presidential power—and, perhaps, the real risk of an imperial AI presidency—comes not from pushing the president’s views down through many iterations of a model that replicates the president’s thinking. Rather, it comes from pushing information up to a single, executive model—the Presidential Model—that serves as a decision-making tool for the most powerful office in the world. Rozenshtein is right that AI offers the prospect of concentrated presidential power, but the truly transformative application of AI is not TrumpGPT. It is the use of AI as a centralized tool within the Oval Office itself.
Such a model is inevitable and necessarily unique, channeling the president’s singular authority under Article II and sitting above most of the legal and policy constraints that currently regulate AI in government. Given these attributes, it is worth considering now how best to construct such a model and what, if anything, constrains it.
The Inevitability of a Presidential Model
What if John F. Kennedy, after receiving input from the ExComm during the Cuban missile crisis, had one more source of input available: a bespoke superintelligent agent on which he relied to assess courses of action? What if the president had the ability to consult a large language model (LLM) not unlike the models many of us use each day—ChatGPT, Gemini, and Claude, to name but a few? As helpful as those commercial tools may be for everyday functions, there are a number of reasons (data leaks, malware, etc.) why the president cannot simply turn to ChatGPT or its peers for strategic advice; hence the need for a bespoke model. At first blush, the notion may seem absurd: The president would hardly place such crucial decisions about U.S. national security—or, as members of the ExComm saw it, the future of humanity—in the hands of a machine. Right?
This absurdity fades, however, when one considers current uses of AI. People use AI for the mundane: to identify a “pet-friendly Italian restaurant nearby with outdoor seating” or to name a pet turtle. They use it in weighty, life-or-death situations, from early-warning stroke detection to aircraft-collision avoidance. And they use AI in national security. Even before ChatGPT “wow[ed] the world,” leaders in the United States recognized that AI was “a force multiplier” in national security, “one that helps[] to make decisions faster and more rigorously, to integrate across all domains, and to replace old ways of doing business.”
Two emerging domains of AI use are particularly telling. The first is AI use among CEOs. Executives in the private sector increasingly rely on AI and incorporate the technology into their executive functions. Multiple CEOs use AI as a “thought partner,” enabling them to identify risks or opportunities, pressure-test strategy, and evaluate decisions from various angles. The technology accelerates CEOs’ consumption and processing of information and shapes their strategic thinking. There is a recognition among C-suites that AI has value as a high-level decision-making tool in business, and some predict that this type of use will be the norm by 2030, with executives using “AI-generated insights to model scenarios, assess risks and guide decisions.” Forbes explained in 2022 how AI “optimizes executive-level decision-making in mega-corporations” by augmenting human analysis: “Using prescriptive or predictive analytics, the system suggests a decision, or a set of decision options, to the human observer. Its advantages come from the combination of human expertise and AI’s capacity to quickly evaluate large amounts of data and deal with complexity.” The point applies equally to the president, who arguably deals with more data and more complexity in the decisions he makes every day than any C-suite executive in a mega-corporation.
The second is AI use in the executive branch itself. A series of executive orders and memoranda under the first Trump administration, the Biden administration, and the current Trump administration stress the need for federal agencies to implement American-developed AI and encourage its use among government employees. President Trump’s April memorandum, “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust,” directs agencies to “adopt a forward-leaning and pro-innovation approach that takes advantage of [AI] to help shape the future of government operations” and generally “accelerate the Federal use of AI[.]” This directive, combined with the rapid development of AI, indicates that the use of AI within the federal workforce will only grow. Within the national security structure, the recent release of GenAI.mil and the push to integrate AI into workflows will hasten the adoption of AI in the military, where it has already transformed wargaming. Recent news reports indicate that senior Army officials have begun integrating AI into decision-making.
The increasing use of AI among private-sector CEOs and public-sector employees creates a pressure, or even an expectation, for the president to leverage the technology in the same way. If the president were not to use the Presidential Model for his executive functions, AI gaps would open between the president and CEOs, and between the president and his or her own subordinates. The question seems not to be if, but when, the president will use the Presidential Model.
The Framework for a Presidential Model
In the Cuban missile crisis, an anachronistic but illustrative example, AI surely would have improved the intelligence at the subsurface of the president’s decision-making: ingesting, compiling, and enhancing available reporting, and perhaps better predicting the actions of the Soviet Union. (Although the intelligence community succeeded in discovering missile sites in Cuba before they became operational, there were certainly issues with intelligence assessments at the time. For example, we now know that the CIA vastly underestimated the number of Soviet troops in Cuba and was unaware of the U.S.S.R.’s prior movement of nuclear-armed medium-range missiles to Cuba.) But AI also could have played a role at the surface, by directly providing advice to President Kennedy. That is the Presidential Model.
From a technical standpoint, there are many plausible ways to construct such a model. Yet for the purposes of this article, it suffices to conceive of a relatively pedestrian framework: a model constructed with a hub-and-spoke design, where the “presidential hub” is the user interface the president sees, and the spokes are individual agents, which sit atop classified and unclassified data streams. Developers could organize the AI agents by subject matter, mirroring, perhaps, the president’s Cabinet—an agent of State, Defense, Treasury, and so on. Or developers could organize the agents by function—an agent for briefings, legal analysis, red teaming, and engagement preparation, among others. There are surely advantages and disadvantages to both approaches.
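For the sake of concreteness, consider a minimal sketch of the hub-and-spoke design in Python. Everything here is hypothetical (the class names, the Cabinet-style topics, the routing logic), offered only to make the architecture tangible, not to prescribe an implementation:

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """A subject-matter spoke sitting atop its own data streams."""
    name: str  # e.g., "State", "Defense", "Treasury"
    data_sources: list[str] = field(default_factory=list)

    def answer(self, query: str) -> str:
        # In a real system, this would retrieve from the agent's own
        # classified and unclassified holdings and call an underlying LLM.
        return f"[{self.name}] analysis of: {query}"


@dataclass
class PresidentialHub:
    """The user interface the president sees; routes queries to spokes."""
    agents: dict[str, Agent] = field(default_factory=dict)

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def consult(self, topic: str, query: str) -> str:
        # Route the president's question to the relevant subject-matter agent.
        agent = self.agents.get(topic)
        if agent is None:
            raise KeyError(f"No agent registered for topic: {topic}")
        return agent.answer(query)


# Usage: a Cabinet-style organization of spokes.
hub = PresidentialHub()
hub.register(Agent("State", data_sources=["diplomatic cables"]))
hub.register(Agent("Defense", data_sources=["operational reporting"]))
print(hub.consult("Defense", "Assess courses of action in scenario X"))
```

Organizing the spokes by function rather than by department would change only the registration step, which is part of the design’s flexibility.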
There are a handful of reasons a “multiple agent” model appears, at this stage at least, superior to a single, general agent. The first is safety. Subdividing the agents may limit the potential damage from a single agent’s misalignment or mistakes. For example, the briefing agent may begin to hallucinate facts about an adversary. But if the model maintains data compartmentalization through separate indices for each of its agents, the other agents may never “see” the briefing agent’s hallucinated facts. Second, a system of agents seems to lay the groundwork for clear lines of oversight and auditability by establishing which agent does what. But the purpose of this article is not necessarily to hash out the optimal model from a technical standpoint; rather, it aims to posit a plausible construction for the model and to stimulate discussion for future development.
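In that spirit, a second sketch suggests what compartmentalization and auditability might look like in miniature. Again, the names and structures are assumptions for illustration: each agent reads only from its own private index, and an append-only log records which agent produced which output.

```python
import json
import time


class CompartmentedAgent:
    """An agent that can retrieve only from its own index."""

    def __init__(self, name: str, index: dict[str, str]):
        self.name = name
        self._index = index  # private: no other agent reads this index

    def retrieve(self, key: str) -> str:
        # A hallucinated or corrupted entry here stays confined to this
        # agent; other agents never "see" these holdings.
        return self._index.get(key, "no holdings")


class AuditLog:
    """Append-only record of which agent answered what, and when."""

    def __init__(self, path: str = "audit.jsonl"):
        self.path = path

    def record(self, agent: str, query: str, output: str) -> None:
        entry = {"ts": time.time(), "agent": agent,
                 "query": query, "output": output}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")


# Usage: separate indices per agent, every exchange logged.
briefing = CompartmentedAgent("Briefing", {"adversary-x": "assessed posture"})
log = AuditLog()
answer = briefing.retrieve("adversary-x")
log.record("Briefing", "adversary-x", answer)
```

The separate indices serve the safety point made above, and the log preserves a reviewable trail of which agent said what.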
Would President Kennedy have consulted the Presidential Model after receiving initial recommendations from the ExComm? Would a future president consult a model in a future conflict—say, for courses of action to counter Russian aggression in Ukraine or to wargame a Chinese invasion of Taiwan? It is not hard to imagine.
A Presidential Model’s Legal Scope and Restrictions
The legal underpinnings of the Presidential Model are perhaps even more interesting than the technical framework. Suffice it to say, the president’s legal authority to leverage an LLM as a decision-making tool appears quite broad. While there are several restrictions on current AI use in the government, most arise from executive orders or agency directives. The president, legally speaking, need not comply with such orders (although there may be normative or prudential reasons for doing so). This raises the question: Are there any legal limits on the Presidential Model? There are, though the president is certainly less constrained than other government actors.
First, the law may limit access to information in ways that affect both building the model and using it. The president would not be able to use the model to monitor U.S. persons in ways that violated the Fourth Amendment, for instance. New technology, of course, does not obliterate the protections afforded under the Bill of Rights. For the same reasons, there may be restrictions on the data acquired to train the model before it ever gets into the president’s hands. Interestingly, this means the authority to build the model within the government may be more limited than the authority to build similar LLMs in the private sector, where web “crawlers” and web “scrapers” collect large swaths of data on which to train and build LLMs. As explained in Scientific American, companies are likely using personal information to train their models, and such use is largely opaque and unconstrained. In the U.S. government, where transparency and privacy are watchwords, the Presidential Model may have more limited access to training data.
Second, the law may also govern the Presidential Model’s retention of information. Such a model will generate volumes of records—the model’s training data, the president’s inputs, and the model’s outputs. Some of those records may fall within the ambit of the Presidential Records Act (PRA), which encompasses “documentary materials, or any reasonably segregable portion thereof, created or received by the President … whose function is to advise or assist the President, in the course of conducting activities which relate to or have an effect upon the carrying out of the constitutional, statutory, or other official or ceremonial duties of the President.” Congress may further shape records retention by conditioning funding for the Presidential Model on compliance with specific criteria. For example, Congress may include a condition that no funds available for the development or operation of the Presidential Model may be obligated unless the model creates and preserves prompts, outputs, and logs sufficient to satisfy the president’s obligations under the PRA. At the same time, the president’s deliberations with AI raise issues of executive privilege.
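To see how such a funding condition might cash out in practice, consider a sketch of a thin records-preservation wrapper: every prompt and output is written to an archive before any response reaches the user. The wrapper, its field names, and the archive format are illustrative assumptions; whether a given record falls within the PRA remains a legal question, not an engineering one.

```python
import json
import time
from typing import Callable


def pra_preserving(model: Callable[[str], str],
                   archive_path: str = "presidential_records.jsonl"
                   ) -> Callable[[str], str]:
    """Wrap a model so every prompt and output is preserved before use.

    Illustrative only: which records the PRA actually reaches is a
    legal determination that no wrapper can settle.
    """
    def wrapped(prompt: str) -> str:
        output = model(prompt)
        record = {"ts": time.time(), "prompt": prompt, "output": output}
        # Write the record before returning, so no exchange escapes retention.
        with open(archive_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return output
    return wrapped


# Usage with a stand-in model:
model = pra_preserving(lambda p: f"analysis of: {p}")
model("Assess options regarding scenario Y")
```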
And third, some potential constraints manifest in novel legal questions. This article assumes the president uses the Presidential Model to augment the advice he otherwise receives. But what if the president begins to defer to the AI model in ways that abdicate his responsibilities? Does Article II impose some duty on the president to maintain independent judgment? Is such a notion captured in the oath to “faithfully execute” the office? Recent research by the multinational software company SAP revealed that “44 percent of C-suite executives would override a decision they had already planned to make based on AI insights [and] [a]nother 38 percent would trust AI to make business decisions on their behalf.” Does the Constitution permit the president to do the same? Article II identifies the president as commander in chief; it does not vest somebody (or something) else with the authority and responsibility to lead our Army and Navy. The Supreme Court in McElrath v. United States recognized the notion of nondelegable presidential acts, and Office of Legal Counsel (OLC) opinions have consistently recognized the same. (OLC concluded, for instance, that only the president may decide whether to approve a bill and may not delegate inherent constitutional powers.) Or what if the president, actually or effectively, replaces members of his Cabinet with AI agents? Are those virtual agents tantamount to “Officers of the United States” under Article II? The Presidential Model runs headlong into these constitutional questions.
The president’s use of an AI model opens a veritable Pandora’s box of legal issues. Does the president’s reliance on the advice of a “virtual advisory committee” implicate the Federal Advisory Committee Act? Is the president constrained by international law? Prudential and ethical issues lurk in the background.
* * *
There are invariably risks to the Presidential Model. Many of them are common to AI use generally—hallucination, misalignment, and so on. Some are amplified in the presidential context. AI sycophancy, for instance, could create a dangerous echo chamber at the highest level of government.
Nevertheless, given AI’s growing adoption inside and outside of government, it seems the Presidential Model is an inevitability. It’s time to start planning for it.
