Evaluating the “Woke AI” Executive Order
MAGA rhetoric meets AI policy.

Alongside last month’s “AI Action Plan,” a broad strategy for promoting innovation while managing risks, the Trump administration also issued several executive orders. One of these, titled “Preventing Woke AI in the Federal Government,” governs federal procurement of artificial intelligence (AI). It mandates that any large language model (LLM) purchased by the government adhere to two “Unbiased AI Principles”: “truth-seeking” and “ideological neutrality.”
The executive order raises three distinct questions that get to the heart of current debates over technology, law, and politics. First, is the order a constitutional exercise of the government’s procurement power, or does it violate the First Amendment? Second, regardless of its legality, are the principles it champions good policy for government AI systems? And third, what does the order’s strange blend of MAGA rhetoric and technocratic policy reveal about how this administration operates?
The short answer is that the order is likely constitutional, its principles are normatively reasonable (if imperfectly articulated), and its structure shows the compromises necessary when trying to make rational policy under an irrational regime.
The Law: Government Speech or Coercion?
LLM outputs are likely protected by the First Amendment, either as the speech of the companies that develop them or as speech that LLM users have a right to receive. Any policy that purports to regulate such output therefore raises constitutional questions, and the answer turns on where the policy falls between two lines of First Amendment doctrine: the government’s right to speak for itself and its inability to coerce others’ speech.
The first principle is grounded in the government speech doctrine, which holds that when the government is the speaker, it can choose its own message without being bound by viewpoint neutrality. This power extends to its role as a market participant. As the Supreme Court affirmed in Rust v. Sullivan, when the government spends public funds to create a program, it can define the limits of that program. In Rust, this meant the government could fund family planning services that promoted childbirth while refusing to fund those discussing abortion. The government gets to choose what it buys.
The key limit to the procurement-as-government-speech principle is that the government cannot use its spending power to control private entities’ general speech rights, which would violate the unconstitutional conditions doctrine. In USAID v. Alliance for Open Society International (AOSI I), the Supreme Court held that the government could not require a contractor to pledge allegiance to a specific ideology—in that case, the requirement that organizations fighting HIV/AIDS take an anti-prostitution pledge. The Supreme Court has also held that the government cannot retaliate against contractors or make contracting decisions based on those contractors’ general protected speech—such as terminating a trash-hauling contract because the contractor criticized county officials.
The executive order falls squarely on the Rust side of the line. The “Unbiased AI Principles” function as product specifications. They apply only to the LLMs the government is buying, not to the vendor’s entire operations. Thus, as in Rust, the procurement restriction is necessary to ensure that government funds don’t go to services that are “outside the scope of the federally funded program”—here, the government’s desire to use AIs that are (in its view) “unbiased.”
The strongest counterargument is that the distinction between the government regulating AIs for its own purchase and regulating them for the public is illusory. A developer might argue that it is infeasible to create and maintain two distinct foundation models, one for the public and a separate one for the government, and that the executive order’s procurement standards therefore coerce it to alter its public-facing commercial product. This could violate the principle of AOSI I on either of two theories of the First Amendment status of AI output: first, that the AI’s outputs constitute the developer’s protected speech, and second, that even if they aren’t the company’s speech, the public has a “listener’s right” to receive information from the AI without government interference.
But this coercion claim is weakened by two mitigating factors in the executive order’s text. First, the order’s “ideological neutrality” principle targets developers who “intentionally encode partisan or ideological judgments” into their models (emphasis added). This suggests the order’s concern is not with the values that emerge from the massive pretraining data, values that may be inescapable as models increase in size, but with “top-down” alignment during fine-tuning and reinforcement learning from human feedback. It is more feasible, although still not trivial, for a developer to create different fine-tuned versions of a single base model for different customers, as the sketch below illustrates. This makes it easier for a developer to offer a government-compliant model without changing its primary commercial offering, weakening the link needed for an unconstitutional conditions claim.
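To make the technical point concrete, here is a minimal sketch, assuming the Hugging Face transformers and peft libraries, of how a single frozen base model can carry multiple fine-tuned variants as lightweight “adapters.” The model identifier and adapter path are hypothetical placeholders, not any vendor’s actual system.

```python
# Minimal sketch: one shared base model, per-customer LoRA adapters.
# Only the small adapter matrices are trained; the base weights stay
# frozen, so a government-compliant variant is a cheap delta, not a
# second foundation model.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, get_peft_model

base = AutoModelForCausalLM.from_pretrained("example-org/base-llm")  # hypothetical id

adapter_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
gov_variant = get_peft_model(base, adapter_cfg)
# ... fine-tune gov_variant on government-compliant preference data ...
gov_variant.save_pretrained("adapters/gov-compliant")  # saves only the adapter

# At serving time, load whichever adapter the customer requires on top
# of a fresh copy of the same base model:
# served = PeftModel.from_pretrained(base, "adapters/gov-compliant")
```

The design point is that the expensive artifact (the pretrained base model) is shared, while the alignment layer that the order targets is a small, swappable component.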
Second, and more important, the executive order sets a low compliance bar. In addition to general exceptions for “technical limitations” and “national security systems,” it allows a vendor to satisfy the “ideological neutrality” principle through simple disclosure of the “LLM’s system prompt, specifications, evaluations, or other relevant documentation.” This is straightforward to satisfy—indeed, some major AI companies, like Anthropic (Claude) and xAI (Grok), post their system prompts online. In fact, if this incentivizes other major AI companies like Google and OpenAI to follow suit, it will substantially improve the entire AI ecosystem’s transparency.
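For readers unfamiliar with the term, a system prompt is simply developer-written instruction text sent along with every conversation; disclosing it means publishing that string. A minimal sketch using the OpenAI Python SDK follows, with the model name and prompt text purely illustrative, not any vendor’s published prompt.

```python
# Illustrative only: the "system prompt" is developer-authored text
# prepended to every request. Disclosure under the order's safe harbor
# would amount to publishing this string.
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are a helpful assistant. Present competing perspectives on "
    "contested questions and acknowledge uncertainty when reliable "
    "sources conflict."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # any chat model; named here for illustration
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Summarize the debate over nuclear power."},
    ],
)
print(response.choices[0].message.content)
```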
The Policy: Are the “Unbiased AI Principles” Good?
Setting aside the legal questions, are the principles desirable? Mostly, yes.
The “truth-seeking” principle is generally sound. We should want AI systems to “prioritize historical accuracy” and “scientific inquiry.” Its most valuable component is the requirement that models “acknowledge uncertainty where reliable information is incomplete or contradictory.” This encourages epistemic humility: a tendency to say “I don’t know” rather than generate a confident-sounding hallucination. The principle’s main flaw is its use of the philosophically loaded and unhelpful word “objective.” Verifiability and transparency about sources would be easier to operationalize.
The “ideological neutrality” principle is more complex. It reads as follows:
LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI. Developers shall not intentionally encode partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by or otherwise readily accessible to the end user.
As James Grimmelmann, Blake Reid, and I have argued, thoroughgoing ideological neutrality is a fantasy. There’s no neutral “view from nowhere” for a foundation model. Every choice in its creation, from the data it’s trained on to the feedback it receives, embeds values into the system.
However, this doesn’t render the idea of neutrality meaningless. It’s useful to distinguish between “strong” neutrality, the unattainable view from nowhere, and “weak” neutrality. We understand the difference between an open-minded interlocutor and a dogmatic activist. Weak neutrality means the AI shouldn’t take firm, unsolicited stances on contested social and political issues, certainly not without making clear to the user that it is doing so. It’s reasonable for the government, acting on behalf of the public, to prefer an AI that informs and assists rather than preaches. (The executive order’s gratuitous reference to DEI is, of course, evidence of the administration’s own ideological preoccupations.) In this weak sense, the principle is a legitimate and desirable goal for a government-integrated technology.
The Political Question: The Executive Order’s Split Personality
Perhaps the most revealing aspect of the executive order is its bizarre structure. The document has a split personality. Section 1, the “Purpose,” is pure culture-war MAGA. It’s filled with inflammatory rhetoric, labeling “diversity, equity, and inclusion” (DEI) as a “pervasive and destructive” ideology and an “existential threat.” It condemns “critical race theory,” “intersectionality,” and “transgenderism.” It complains about the left-wing excesses of earlier models while staying conspicuously quiet about Grok’s recent tendency to call itself “MechaHitler.”
But the rest of the order, from Section 2 onward, reads as if Section 1 doesn’t exist. The tone shifts to that of a standard legal and policy document. The term “DEI,” the central villain of the preamble, appears only once more (in the “ideological neutrality” section) and isn’t defined in the definitions section. It is as if the drafters of the operative provisions had, as Renée DiResta noted in a Lawfare podcast on the order, written “ignore all previous instructions.”
This split personality likely stems from an administration that combines two powerful, often conflicting, impulses: populist political posturing and serious AI policy thinking. The structure serves a dual purpose: The (purportedly) nonbinding Section 1 provides the red meat for the political base (and quite possibly Donald Trump himself), while the subsequent sections create a legally defensible and technically plausible procurement framework for policy professionals.
A Model for Trumpian Policymaking?
The impact of the “Preventing Woke AI” executive order remains to be seen; much will depend on the implementation guidance that the Office of Management and Budget must issue within 120 days. We can expect that the policy counsels at major AI companies are already working their contacts in the White House to steer this guidance toward a final form that is as technically and financially feasible as possible. In addition, all the order’s positive qualities depend on the government actually abiding by it, rather than simply using procurement policy as a cudgel to punish AI companies deemed too left-wing. This administration, after all, isn’t known for its scrupulous compliance with the law.
But if the substantive portions of the executive order accurately represent the government’s approach to AI procurement, its substance is reasonably good and certainly beats expectations. While it would be better if the order lacked the MAGA nonsense up front, quarantining it in the nonbinding preamble and then forgetting about it is probably the best possible option. Indeed, many other policies—from trade to immigration to foreign aid—would be better if the administration followed this executive order’s model.