The Unitary Artificial Executive
Published by The Lawfare Institute
Editor’s note: The following are remarks delivered on October 23, 2025, at the University of Toledo Law School’s Stranahan National Issues Forum. Watch a recording of the address here.
Good afternoon. I'd like to thank Toledo Law School and the Stranahan National Issues Forum for the invitation to speak with you today. It's an honor to be part of this series.
In 1973, the historian Arthur Schlesinger Jr., who served as a senior adviser in the Kennedy White House, gave us "The Imperial Presidency," documenting the systematic expansion of unilateral presidential power that began with Washington and that Schlesinger was chronicling in the shadow of Nixon and Watergate. Each administration since then, Democrat and Republican alike, has argued for expansive executive authorities. Ford. Carter. Reagan. Bush 1. Clinton. Bush 2. Obama. The first Trump administration. Biden. And what we're watching now in the second Trump administration is breathtaking.
This pattern of ever-expanding executive power has always been driven partly by technology. Indeed, through human history, transformative technologies drove large-scale state evolution. Agriculture made populations large enough for taxation and conscription. Writing enabled bureaucratic empires across time and distance. The telegraph and the railroad annihilated space, centralizing control over vast territories. And computing made the modern administrative state logistically possible.
For American presidents specifically, this technological progression has been decisive. Lincoln was the first "wired president," using the telegraph to centralize military command during the Civil War. FDR, JFK, and Reagan all used radio and then television to "go public" and speak directly to the masses. Trump is the undisputed master of social media.
I've come here today to tell you: We haven't seen anything yet.
Previous expansions of presidential power were still constrained by human limitations. But artificial intelligence, or AI, eliminates those constraints—producing not incremental growth but structural transformation of the presidency. In this lecture I want to examine five mechanisms through which AI will concentrate unprecedented authority in the White House, turning Schlesinger's "Imperial Presidency" into what I call the "Unitary Artificial Executive."
The first mechanism is the expansion of emergency powers. AI crises—things like autonomous weapons attacks or AI-enabled cybersecurity breaches—justify broad presidential action, exploiting the same judicial deference to executive authority in emergencies that courts have shown from the Civil War through 9/11 to the present.
Second, AI enables perfect enforcement through automated surveillance and enforcement mechanisms, eliminating the need for the prosecutorial discretion that has always limited executive power.
The third mechanism is information dominance. AI-powered messaging can saturate the public sphere through automated propaganda and micro-targeted persuasion, overwhelming the marketplace of ideas.
Fourth, AI in national security creates what scholars call the "double black box"—inscrutable AI nested inside national security secrecy. And when these inscrutable systems operate at machine speed, oversight becomes impossible. Cyber operations and autonomous weapons engagements complete in milliseconds—too fast and too opaque for meaningful oversight.
And fifth—and most dramatically—AI can finally realize the vision of the unitary executive. By that I mean something specific: not just a presidency with broad substantive authorities, but one that exerts complete, centralized control over executive branch decision-making. AI can serve as a cognitive proxy throughout the executive branch, injecting presidential preferences directly into algorithmic decisions, making unitary control technologically feasible for the first time.
These five mechanisms operate in two different ways. The first four expand the practical scope of presidential authority—emergency powers, enforcement, information control, and national security operations. They expand what presidents can do. The fifth mechanism is different. It's about control. It determines how those powers are exercised. And the combination of these two creates an unprecedented concentration of power.
My argument is forward-looking, but it's not speculative. From a legal perspective, these mechanisms build on existing presidential powers and fit comfortably within current constitutional doctrine. From a technological perspective, none of this requires artificial superintelligence or even artificial general intelligence. All of these capabilities are doable with today's tools, and certainly achievable within the next few years.
Now, before we go further, let me tell you where I'm coming from. My academic career has focused on two research areas: first, the regulation of emerging technology, and, second, executive power. Up until now, these have been largely separate. This lecture brings those two tracks together.
But I also have some practical experience that's relevant to this project. Before becoming a law professor, I was a junior policy attorney in the National Security Division at the Department of Justice. In other words, I was a card-carrying member of what the current administration calls the "deep state."
One thing I learned is that the federal bureaucracy is very hard to govern. Decision-making is decentralized, information is siloed, civil servants have enormous autonomy—not so much because of their formal authority but because governing millions of employees is, from a practical perspective, impossible. That practical ungovernability is about to become governable.
Together with Nicholas Bednar, my colleague at the University of Minnesota Law School, I've been researching how this transformation might happen—and what it means for constitutional governance. This lecture is the first draft of the research we've been conducting.
So let's jump in. To understand how the five mechanisms of expanded presidential power will operate—and why they're not speculative—we need to start with AI's actual capabilities. So what can AI actually do today, and what will it be able to do in the near future?
What Can AI Actually Do?
Again, I'm not talking about artificial general intelligence or superintelligence—those remain speculative, possibly decades away. I'm talking about today's capabilities, including technology that is right now deployed in government systems.
It's helpful to think of AI as a pipeline with three stages: collection, analysis, and execution.
The first stage is data collection at scale. The best AI-powered facial recognition achieves over 99.9 percent accuracy, and Clearview AI—used by federal and state law enforcement—holds more than 60 billion images. The Department of Defense's Project Maven—an AI-powered video analysis program—demonstrates the impact: 20 people using AI now replicate what required 2,000. That's a 100-fold increase in efficiency.
The second stage is data analysis. AI analyzes data at scales humans cannot match. FINRA—the financial industry self-regulator—processes 600 billion transactions daily using algorithmic surveillance, a volume that would require an army of analysts. FBI algorithms assess thousands of tip line calls a day for threat level and credibility. Systems like those from the technology company Palantir integrate databases across dozens of agencies in real time. All this analysis happens continuously, comprehensively, and faster than human oversight.
The third stage is automated execution, which operates at speeds and scales outstripping human capabilities. For example, DARPA's AI-controlled F-16 has successfully engaged human pilots in mock dogfights, demonstrating autonomous combat capability. And the federal cybersecurity agency's autonomous systems block more than a billion suspicious network connection requests across the federal government every year.
To summarize: AI can sense everything, process everything, and act on everything—all at digital speed and scale.
These are today's capabilities—not speculation about future AI. But they're also just the baseline. And they're scaling up dramatically—driven by two forces.
The first driver is the internal trajectory of AI itself. Training compute—the processing power used to build AI systems—has increased four to five times per year since 2010. Epoch AI, a research organization tracking AI progress, projects that frontier AI models will use thousands of times more compute than OpenAI's GPT-4 by 2030, with training clusters costing over $100 billion.
What will this enable? By 2030 at the latest, AI should be capable of building large-scale software projects, producing advanced mathematical proofs, and engaging in multi-week autonomous research. In government, that means AI systems that don't just analyze but execute complete, large-scale tasks from start to finish.
The second driver of AI advancement is geopolitical competition. China's 2017 AI Development Plan targets global leadership by 2030, backed by massive state investment. They've deployed generative AI news anchors and built the nationwide Skynet video surveillance system—and yes, they actually called it that. China's technical capabilities are advancing rapidly—the DeepSeek breakthrough earlier this year demonstrated that Chinese researchers can match or exceed Western AI performance, often at a fraction of the cost.
In today's polarized Washington, there's only one thing Democrats and Republicans agree on: China is a threat that must be confronted. That consensus is driving much of AI policy. So it's unsurprising that the administration's recent AI Action Plan frames the U.S. response as seeking "unquestioned ... technological dominance." Federal generative AI use cases have increased ninefold in one year, and the Defense Department awarded $800 million in AI contracts this past July. The department has also established detailed procedures for developing autonomous lethal weapons, reflecting the Pentagon's assumption that such systems are the future.
It's easy to see how this competitive dynamic could be used to justify concentrating AI in the executive branch. "We can't afford congressional delays. Transparency would give adversaries advantages. Traditional deliberation is incompatible with the speed of AI development." The AI arms race could easily become a permanent emergency justifying rapid deployment.
Five Mechanisms Through Which AI Concentrates Presidential Power
So those are the drivers of AI progress—rapidly advancing capabilities and geopolitical pressure. Now let's examine the five distinct mechanisms through which these forces will actually concentrate presidential power.
Mechanism 1: Emergency Powers
Presidential emergency powers rest on two sources with deep historical roots. The first is inherent presidential authority under Article II. For example, during the Civil War, Lincoln blockaded Southern ports, increased the army, and spent unauthorized funds, all claiming inherent constitutional authority as commander in chief.
The second source of emergency powers is explicit congressional delegation. When FDR closed every bank in March 1933, he did so under the Trading with the Enemy Act. After 9/11, Congress passed an Authorization for Use of Military Force—still in effect two decades later and the source of ongoing military operations across multiple continents. Today the presidency operates under more than 40 continuing national emergencies. For example, Trump has invoked the International Emergency Economic Powers Act (IEEPA) to impose many of his ongoing tariffs, declaring trade imbalances a national security emergency.
With both sources, courts usually defer. From the Prize Cases upholding Lincoln's Southern blockade through Korematsu affirming Japanese internment to Trump v. Hawaii permitting the first Trump administration's Muslim travel bans, the Supreme Court has generally granted presidents extraordinary latitude during emergencies. There are of course exceptions—Youngstown and the post-9/11 cases like Hamdi and Boumediene being the most famous—but the pattern is clear: When the president invokes national security or emergency powers, judicial review is limited.
So what has constrained emergency powers? The emergencies themselves. Throughout history, emergencies were rare and time limited—the Civil War, the Great Depression, Pearl Harbor, 9/11. Wars ended, and crises receded. Our separation-of-powers framework has worked because emergencies have generally been the temporary exception, not the norm.
AI breaks this assumption.
AI empowers adversaries asymmetrically—giving offensive capabilities that outpace defensive responses. Foreign actors can use AI to identify vulnerabilities, automate attacks, and target critical infrastructure at previously impossible scale and speed. The same AI capabilities that strengthen the president also strengthen our adversaries, creating a perpetual heightened threat that justifies permanent emergency powers.
Here's what an AI-enabled emergency might look like. A foreign adversary uses AI to target U.S. critical infrastructure—things like the power grid, financial systems, or water treatment. Within hours, the president invokes IEEPA, the Defense Production Act, and inherent Article II authority. AI surveillance monitors all network traffic. Algorithmic screening begins for financial transactions. And compliance monitoring extends across critical infrastructure.
The immediate crisis might pass in 48 hours, but the emergency infrastructure never gets dismantled. Surveillance remains operational, and each emergency builds infrastructure for the next one.
Why does our constitutional system permit this? First, speed: Presidential action completes before Congress can react. Second, secrecy: Classification shields details from Congress, courts, and the public. Third, judicial deference: Courts defer almost automatically when "national security" and "emergency" appear in the same sentence. And, as if to add insult to injury, the president's own AI systems might soon be the ones assessing threats and determining what counts as an emergency.
Mechanism 2: Perfect Enforcement
Emergency powers are—theoretically, at least—episodic. But enforcement of the laws happens continuously, every day, in every interaction between citizen and state. That's where the second mechanism—perfect enforcement—operates.
Pre-AI governance depends on enforcement discretion. We have thousands of criminal statutes and millions of regulations, and so, inevitably, prosecutors have to choose cases, agencies have to prioritize violations, and police have to exercise judgment. The Supreme Court has recognized this necessity: In cases like Heckler v. Chaney, Batchelder, and Wayte, the Court held that non-enforcement decisions are presumptively unreviewable because agencies must allocate scarce resources. This discretion prevents tyranny by allowing mercy, context, and human judgment.
AI eliminates that necessity. When every violation can be detected and every rule can be enforced, enforcement discretion becomes a choice rather than a practical constraint. The question becomes: What happens when the Take Care Clause meets perfect enforcement? Does the Take Care Clause allow the president to enforce the laws to the hilt? Might it require him to?
As an example, consider what perfect immigration enforcement might look like. (And you can imagine this across every enforcement domain: tax compliance, environmental violations, workplace safety—even traffic laws.) Already facial recognition databases cover tens of millions of Americans, real-time camera networks monitor movement, financial systems track transactions, social media analysis identifies patterns, and automated risk assessment scores individuals. Again, China is leading the way—its "social credit" system demonstrates what's possible when these technologies are integrated.
Now imagine the president directs DHS to do the same: build a single AI system that identifies every visa overstay and automatically generates enforcement actions. There are no more "enforcement priorities"—the algorithm flags everyone, and ICE officers blindly execute its millions of directives with perfect consistency.
Why does the Constitution allow this? The Take Care Clause traditionally required discretion because resource limits made total enforcement impossible. But AI changes this. Now the Take Care Clause can be read as consistent with eliminating discretion—the president isn't violating his duty by enforcing everything, he’s just being thorough.
More aggressively: The president might argue that perfect enforcement is not just permitted but required. Congress wrote these laws, and the president is merely faithfully executing what Congress commanded now that technology makes it possible. If there's no resource constraint, there's no justification for discretion.
What about Equal Protection or Due Process? The Constitution might actually favor algorithmic enforcement. Equal Protection could be satisfied by perfect consistency if algorithmic enforcement treats identical violations identically, eliminating the arbitrary disparities that plague human judgment. And Due Process might be satisfied if AI proves more accurate than humans, which it may well be. Power once dispersed among millions of fallible officials becomes concentrated in algorithmic policy that could, compared to the human alternative, be more consistent, more accurate, and more just.
There's one final effect that perfect enforcement produces: It ratchets up punishment beyond congressional intent. Congress wrote laws assuming enforcement discretion would moderate impact. They set harsh penalties knowing prosecutors would focus on serious cases and agencies would prioritize egregious violations, while minor infractions would largely be ignored.
But AI removes that backdrop. When every violation is enforced—even trivial ones Congress never expected would be prosecuted—the net effect is dramatically higher punitiveness. Congress calibrated the system assuming discretion would filter out minor cases. AI enforces everything, producing an aggregate severity Congress never intended.
Mechanism 3: Information Dominance
The first two mechanisms concentrating presidential power—emergency powers and perfect enforcement—expand what the president can do. The third mechanism is about controlling what citizens know. AI enables the president to saturate public discourse at unprecedented scale. And if the executive controls what citizens see, hear, and believe, how can Congress, courts, or the public effectively resist?
The Supreme Court has held that the First Amendment doesn't restrict the government's own speech. This government speech doctrine means that the government can select monuments, choose license plate messages, communicate preferred policies—all with no constitutional limit on volume, persistence, or sophistication.
Until now, practical constraints limited the scale of this speech—more messages required more people, more time, and more resources. AI eliminates these constraints, enabling content generation at near-zero marginal cost, operating across all platforms simultaneously, and delivering personalized messages to every citizen. The government speech doctrine never contemplated AI-powered saturation, and there is no limiting principle in existing case law.
Again, look to China for the future—it's already using AI to saturate public discourse. In August, leaked documents revealed that GoLaxy, a Chinese AI company, built a "Smart Propaganda System"—AI that monitors millions of posts daily and generates personalized counter-messaging in real time, producing content that "feels authentic ... and avoids detection." The Chinese government has used it to suppress Hong Kong protest movements and influence Taiwanese elections.
Now imagine an American president deploying these capabilities domestically.
It's 2027. A major presidential scandal breaks—Congress investigates, courts rule executive actions unconstitutional, and in response the Presidential AI Response System activates. It floods social media platforms, news aggregators, and recommendation algorithms with government-generated content.
You're a suburban Ohio parent worried about safety, and your phone shows AI-generated content about how the congressional investigation threatens law enforcement funding, accompanied by fake "local crime statistics." Your neighbor, a student at the excellent local law school, is concerned about civil liberties—she sees completely different content about "partisan witch hunts" undermining due process. Same scandal, different narratives—the public can't even agree on basic facts.
The AI system operates in three layers. First, it generates personalized messaging, detecting which demographics are persuadable and which narratives are gaining traction, A/B testing and adjusting counter-messages in real time. Second, it manipulates platform algorithms, persuading social media companies to down-rank "disinformation"—which means congressional hearings never surface in your feed and news about court decisions gets buried. Third, it saturates public discourse through sheer volume, generating millions of messages across all platforms that drown out opposition not through censorship but through scale that private speakers can't match.
And all the while the First Amendment offers no constraint because the government speech doctrine allows the government to say whatever it wants, as much as it wants.
Information dominance makes resistance to the other mechanisms impossible. How do you organize opposition to emergency powers if you never hear about them? How do you resist perfect enforcement if you've been convinced it's necessary? And how do you check national security decisions if you're convinced of the threat—and if you can't understand how the AI made the decision in the first place?
Which brings us to the fourth mechanism.
Mechanism 4: The National Security Black Box
National security is where presidential power reaches its apex. The Constitution grants the president enormous authority as commander in chief, along with control over intelligence and classification. And courts have historically granted extreme deference: They defer to military judgments, and the "political question" doctrine bars review of many national security decisions.
Congress retains constitutional checks—the power to declare war, appropriate funds, demand intelligence briefings, and conduct investigations. But AI creates what University of Virginia law professor Ashley Deeks calls the "double black box"—a problem that renders these checks ineffective.
The first—inner—box is AI's opacity. AI systems are inscrutable black boxes that even their designers can't fully explain. Congressional staffers lack technical expertise to evaluate them, and courts have no framework for passing judgment on algorithmic military judgments. No one—not even the executive branch officials nominally in charge—can explain why the AI reached a particular decision.
The second—outer—box is traditional national security secrecy. Classification shields operational details and the state secrets privilege blocks judicial review. The executive controls intelligence access, meaning Congress depends on the executive for the very information needed for oversight.
These layers combine: Congress can't oversee what it can't see or understand. Courts can't review what they can't access or evaluate. The public can't hold anyone accountable for what's invisible and incomprehensible.
And then speed makes things worse. AI operations complete in minutes, if not seconds, creating fait accompli before oversight can engage. By the time Congress learns what happened through classified briefings, facts on the ground have changed. Even if Congress could overcome both layers of inscrutability, it would be too late to restrain executive action.
Consider what this could look like in practice. It's 3:47 a.m., and a foreign military AI probes U.S. critical infrastructure: This time it's the industrial-control systems that control the eastern seaboard's electrical grid.
Just 30 milliseconds later, U.S. Cyber Command's AI detects the intrusion and assesses a 99.7 percent probability that this is reconnaissance for a future attack.
Less than a second later, the AI decision tree executes. It evaluates options—monitoring is insufficient, counter-probing is inadequate, blocking would only be temporary—and selects a counterattack targeting foreign military command and control. The system accesses authorization from pre-delegated protocols and deploys malware.
Three minutes after the initial probe, the U.S. AI has disrupted foreign military networks, taking air defense offline, compromising communications, and destabilizing the attackers' own power grids.
At 3:51 a.m., a Cyber Command officer is notified of the completed operation. At 7:30 a.m., the president receives a breakfast briefing on a serious military operation in which she—supposedly the commander in chief—played no role. But she's still better off than congressional leadership, which learns about the operation only later that day, when CNN breaks the story.
This won't be an isolated incident. Each AI operation completes before oversight is possible, establishing precedent for the next. By the time Congress or courts respond, strategic facts have changed. The constitutional separation of war powers requires transparency and time—both of which AI operations eliminate.
Mechanism 5: Realizing the Unitary Executive
The first four mechanisms—emergency powers, perfect enforcement, information dominance, and inscrutable national security decisions—expand the scope of presidential power. Each extends presidential reach.
But the fifth mechanism is different. It's not about doing more but about controlling how it gets done. After all, how is a single president supposed to control a bureaucracy of nearly 3 million employees making untold decisions every day? The unitary executive theory has been debated for over two centuries and has recently become the dominant constitutional position at the Supreme Court. But in all this time it's always been, practically speaking, impossible. AI removes that practical constraint.
Article II, Section 1, states that "The executive Power shall be vested in a President." THE executive power. A President. Singular. This is the textual foundation for the unitary executive theory: the idea that all executive authority flows through one person and that this one person must therefore control all executive authority.
The main battleground for this theory has been unilateral presidential firing authority. If the president can fire subordinates at will, control follows. The First Congress debated this in 1789, when James Madison proposed that department secretaries be removable by the president alone. Congress's decision at the time implied that the president had such a power, but we've been fighting about presidential control ever since.
The Supreme Court has zigzagged on this issue, from Myers in 1926 affirming presidential removal power, to Humphrey's Executor less than a decade later carving out huge exceptions for independent agencies, to Morrison v. Olson in 1988, where Justice Antonin Scalia's lone dissent defended the unitary executive. But by Seila Law v. CFPB in 2020, Scalia's dissent had become the majority view. Unitary executive theory is now ascendant. (And we'll see how far the Court pushes it when it decides on Federal Reserve Board independence later this term.)
But in a practical sense, the constitutional questions have always been second-order. Even if the president had constitutional authority for unitary control, practical reality made it impossible. Harry Truman famously quipped about Eisenhower upon his election in 1952: "He'll sit here [in the Oval Office] and he'll say, 'Do this! Do that!' And nothing will happen. Poor Ike—it won't be a bit like the Army. He'll find it very frustrating."
One person just can't process information from millions of employees, supervise 400 agencies, and know what subordinates are doing across the vast federal bureaucracy. Career civil servants can slow-roll directives, misinterpret guidance, quietly resist—or simply just not know what the president wants them to do. The real constraint on presidential power has always been practical, not constitutional.
But AI removes those constraints. It transforms the unitary executive theory from a constitutional dream into an operational reality.
Here's a concrete example—real, not hypothetical. In January, the Trump administration sent a "Fork in the Road" email to federal employees: return to office, accept downsizing, pledge loyalty, or take deferred resignation. DOGE—the Department of Government Efficiency—deployed Meta's Llama 2 AI model to review and classify responses. In a subsequent email, DOGE asked employees to describe weekly accomplishments and used AI to assess whether work was mission critical. If AI can determine mission-criticality, it can assess tone, sentiment, loyalty, or dissent.
DOGE analyzed responses to one email, but the same technology works for every email, every text message, every memo, and every Slack conversation. Federal email systems are centrally managed, workplace platforms are deployed government-wide, and because Llama is open source, Meta can't refuse to have its systems used in this way. And because federal employees have limited privacy expectations in their work communications, the Fourth Amendment permits most of this workplace monitoring.
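To make the mechanics concrete, here is a minimal sketch of what this kind of automated triage looks like. Everything in it is hypothetical and illustrative: the model call is stubbed with a crude keyword heuristic standing in for an actual language model such as Llama, and all names, keywords, and thresholds are my own inventions, not anything DOGE is known to have used.

```python
# Hypothetical sketch of AI triage over employee messages. The "model" is a
# keyword-scoring stub; a real deployment would swap in an LLM. All names,
# signal words, and thresholds are illustrative assumptions, not reported facts.

from dataclasses import dataclass

@dataclass
class Message:
    employee: str
    text: str

def score_mission_critical(text: str) -> float:
    """Stand-in for a model call: return a 0-1 'mission critical' score."""
    signals = ["national security", "enforcement", "border", "deadline"]
    hits = sum(1 for s in signals if s in text.lower())
    return min(1.0, hits / 2)

def triage(messages, threshold=0.5):
    """Score every message; nothing is skipped—discretion is a parameter."""
    flagged, cleared = [], []
    for m in messages:
        target = flagged if score_mission_critical(m.text) >= threshold else cleared
        target.append(m)
    return flagged, cleared

inbox = [
    Message("a", "Closed three enforcement cases ahead of deadline."),
    Message("b", "Reorganized the supply closet."),
]
critical, other = triage(inbox)
```

The point of the sketch is the architecture, not the stub: once every message flows through a central scorer, the same pipeline can be repointed at tone, sentiment, or loyalty simply by changing what the model is asked to score.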
Monitoring is just the beginning. The real transformation comes from training AI on presidential preferences. The training data is everywhere: campaign speeches, policy statements, social media, executive orders, signing statements, tweets, all continuously updated. The result is an algorithmic representation of the president's priorities. Call it TrumpGPT.
Deploy that model throughout the executive branch and you can route every memo through the AI for alignment checks, screen every agenda for presidential priorities, and evaluate every recommendation against predicted preferences. The president's desires become embedded in the workflow itself.
But it goes further. AI can generate presidential opinions on issues the president never considered. Traditionally, even the wonkiest of presidents have had enough cognitive bandwidth for only 20, maybe 30 marquee issues—immigration, defense, the economy. Everything else gets delegated to bureaucratic middle management.
But AI changes this. The president can now have an "opinion" on everything. EPA rule on wetlands permits? The AI cross-references it with energy policy. USDA guidance on organic labeling? Check against agricultural priorities. FCC decision on rural broadband? Align with public statements on infrastructure. The president need not have personally considered these issues; it's enough that the AI learned the president's preferences and applies them. And if you're worried about preference drift, just keep the model accurate through a feedback loop, periodically sampling a few decisions and validating them with the president.
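The workflow described above—every recommendation checked against a learned model of presidential preferences, with a sampled feedback loop to correct drift—can be sketched in a few lines. This is purely illustrative: the preference model is a stub with two hand-coded topic weights, and every name and number is an assumption of mine, not a description of any real system.

```python
# Hypothetical sketch of the "cognitive proxy" loop: each memo is scored
# against a stubbed presidential-preference model, and a random sample is
# queued for human validation to catch preference drift. All weights,
# names, and rates are illustrative assumptions.

import random

PREFERENCES = {"energy": +1.0, "regulation": -1.0}  # stub preference weights

def predicted_preference(memo: str) -> float:
    """Stand-in for a trained preference model: score a memo's alignment."""
    return sum(w for topic, w in PREFERENCES.items() if topic in memo.lower())

def review(memos, sample_rate=0.1, rng=random.Random(0)):
    """Check every memo against the model; sample a few for human audit."""
    decisions, audit_queue = [], []
    for memo in memos:
        aligned = predicted_preference(memo) >= 0
        decisions.append((memo, aligned))
        if rng.random() < sample_rate:
            audit_queue.append(memo)  # human validates, result feeds back
    return decisions, audit_queue

memos = ["Expand energy permits", "New regulation on labeling"]
decisions, audit = review(memos)
```

Note what the loop buys the president: the model renders a verdict on every memo, while the human only ever sees the small audited sample—which is exactly how one person's preferences can plausibly govern millions of decisions.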
And here's why this matters: Once the president achieves AI-enabled control over the executive branch, all the other mechanisms become far more powerful. When emergency powers are invoked, the president can now deploy that authority systematically across every agency simultaneously through AI systems. Perfect enforcement becomes truly universal when presidential priorities are embedded algorithmically throughout government. Information dominance operates at massive scale when all executive branch communications are coordinated through shared AI frameworks. And inscrutable national security decisions multiply when every agency can act at machine speed under algorithmic control. Each mechanism reinforces the others.
Now, this might all sound like dystopian science fiction. But here's what's particularly disturbing: This AI-enabled control actually fulfills the Supreme Court's vision of the unitary executive theory. It's the natural synthesis of a 21st-century technology meeting this Court's interpretation of an 18th-century document. Let me show you what I mean by taking the Court's own reasoning seriously.
In Free Enterprise Fund v. PCAOB in 2010, the Court wrote: "The Constitution requires that a President chosen by the entire Nation oversee the execution of the laws." And in Seila Law a decade later: "Only the President (along with the Vice President) is elected by the entire Nation."
The argument goes like this: The president has unique democratic legitimacy as the only official elected by all voters. Therefore the president should control the executive branch. This is not actually a good argument, but let's accept the Court's logic for a moment.
If the president is the uniquely democratic voice that should oversee execution of all laws, then what's wrong with an AI system that replicates presidential preferences across millions of decisions? Isn't that the apogee of democratic accountability? Every bureaucratic decision aligned with the preferences of the only official chosen by the entire nation?
This is the unitary executive theory taken to its absurd, yet logical, conclusion.
Solutions
Let's review. We've examined five mechanisms concentrating presidential power: emergency powers creating permanent crisis, perfect enforcement eliminating discretion, information dominance saturating discourse, the national security black box too opaque and fast for oversight, and AI making the unitary executive technologically feasible. Together they create an executive too fast, too complex, too comprehensive, and too powerful to constrain.
So what do we do? Are there legal or institutional responses that could restrain the Unitary Artificial Executive before it fully materializes?
Look, my job as an academic is to spot problems, not fix them. But it seems impolite to leave you all with a sense of impending doom. So—acknowledging that I'm more confident in the diagnosis than the prescription—let me offer some potential responses.
But before I do, let me be clear: Although I've spent the past half hour on doom and gloom, I'm the farthest thing from an AI skeptic. AI can massively improve government operations through faster service, better compliance, and reduced bias. At a time when Americans believe government is dysfunctional, AI offers real solutions. The question isn't whether to use AI in government. We will, and we should. The question is how to capture these benefits while preventing unchecked concentration of power.
Legislative Solutions
Let's start with legislative solutions. Congress could, for example, require congressional authorization before the executive branch deploys high-capability AI systems. It could limit emergency declarations to 30 or 60 days without renewal. And it could require explainable decisions with a human-in-the-loop for critical determinations.
But the challenges are obvious. Any president can veto restrictions on their own power, and in our polarized age it's very hard to imagine a veto-proof majority. The president also controls how the laws are executed, so statutory requirements could be interpreted narrowly or ignored. Classification could shield AI systems from oversight. And "human-in-the-loop" requirements could become mere rubber-stamping.
Institutional and Structural Reforms
Beyond statutory text, we need institutional reforms. Start with oversight: Create an independent inspector general for AI with technical experts and clearance to access classified systems. But since oversight works only if overseers understand the technology, we also need to build congressional technical capacity by restoring the Office of Technology Assessment and expanding the Congressional Research Service's AI expertise. Courts need similar resources—technical education programs and access to court-appointed AI experts.
We could also work through the private sector, imposing explainability and auditing requirements on companies doing AI business with the federal government. And most ambitiously, we could try to embed legal compliance directly into AI architecture itself, designing "law-following AI" systems with constitutional constraints built directly into the models.
But, again, each of these proposals faces obstacles. Inspectors general risk capture by the agencies they oversee. Technical expertise doesn't guarantee political will—Congress and courts may understand AI but still defer to the executive. National security classification could exempt government AI systems from explainability and auditing requirements. And for law-following AI, we still need to figure out how to teach a model what "following the law" actually means.
Constitutional Responses
Maybe the problem is more fundamental. Maybe we need to rethink the constitutional framework itself.
Constitutional amendments are unrealistic—the last was ratified in 1992, and partisan polarization makes the Article V process nearly impossible.
So more promising would be judicial reinterpretation of existing constitutional provisions. Courts could hold that Article II's Vesting and Take Care Clauses don't prohibit congressional regulation of executive branch AI. Courts could use the non-delegation doctrine to require that Congress set clear standards for AI deployment rather than giving the executive blank-check authority. And due process could require algorithmic transparency and meaningful human oversight as constitutional minimums.
But maybe the deeper problem is the unitary executive theory itself. That's why I titled this lecture "The Unitary Artificial Executive"—as a warning that this constitutional theory becomes even more dangerous once AI makes it technologically feasible.
So here's my provocation to my colleagues in the academy and the courts who advocate for a unitary executive: Your theory, combined with AI, leads to consequences you never anticipated and probably don't want. The unitary executive theory values efficiency, decisiveness, and unity of command. It treats bureaucratic friction as dysfunction. But what if that friction is a feature, not a bug? What if bureaucratic slack, professional independence, expert dissent—the messy pluralism of the administrative state—are what stands between us and tyranny?
The ultimate constitutional solution may require reconsidering the unitary executive theory itself. Perfect presidential control isn't a constitutional requirement but a recipe for autocracy once technology makes it achievable. We need to preserve spaces where the executive doesn't speak with one mind—whether that mind is human or machine.
Conclusion
I've just offered some statutory approaches, institutional reforms, and constitutional reinterpretations. But let's be honest about the obstacles: AI develops faster than law can regulate it. Most legislators and judges don't understand AI well enough to constrain it. And both parties want presidential power when they control it.
But lawyers have confronted existential rule-of-law challenges before. After Watergate, the Church Committee reforms imposed real constraints on executive surveillance. After 9/11, when the executive claimed unchecked detention authority, lawyers fought back, forcing the Supreme Court to curb the overreach. When crisis and executive power threaten constitutional governance, lawyers have been the constraint.
And, to the students in the audience, let me say: You will be too.
You're entering the legal profession at a pivotal moment. The next decade will determine whether constitutional government survives the age of AI. Lawyers will be on the front lines of this fight. Some will work in the executive branch as the humans in the loop. Some will work in Congress—drafting statutes and demanding explanations. Some will litigate—bringing cases, conducting discovery, and forcing judicial confrontation.
The Unitary Artificial Executive is not inevitable. It's a choice we're making incrementally, often without realizing it. The question is: Will we choose to constrain it while we still can? Or will we wake up one day to find we've built a constitutional autocracy—not through a coup, but through code?
This is a problem we're still learning to see. But seeing it is the first step. And you all will determine what comes next.
Thank you. I look forward to your questions.
