
What Comes Next in AI Regulation?

Kevin Frazier
Monday, July 28, 2025, 1:01 PM

While the administration’s AI Action Plan received a surprisingly positive reception, its ambitious scope may make implementation difficult.


A vortex of blue rectangles of light (Photo: Pixabay, https://tinyurl.com/38hs9dwr, Free Use)

Published by The Lawfare Institute in Cooperation With Brookings

A few days removed from the release of the AI Action Plan, it’s now possible to take a slightly more nuanced perspective on what many observers have heralded as “not bad.”

Americans for Responsible Innovation President Brad Carson, for example, regarded the plan as “cautiously promising.” Michael Horowitz of the Council on Foreign Relations characterized it as aligned with “an ongoing bipartisan approach to the U.S. leadership in AI.” The Atlantic Council labeled it a “deliberative and thorough plan.” Of course, some took a less favorable view—the New York Times promptly ran a summary under the headline, “Trump Plans to Give A.I. Developers a Free Hand.” Still, a scroll through X, Bluesky, and LinkedIn in the hours following the publication of the long-awaited document returned a fairly uniform, positive assessment.

This general embrace speaks both to the state of the artificial intelligence regulatory space and to the contents of the plan itself. On the former, just a few weeks ago, the fierce battle around the AI moratorium proposed and later voted out of the “One Big Beautiful Bill” suggested that partisanship had finally and firmly entrenched itself in AI governance. That this plan managed to earn support from both sides of that prior debate has several possible explanations. It could be that positions in AI policy debates are not as deeply entrenched as they appear, leaving stakeholders more receptive to new evidence or shifting circumstances. It may also be true that the expansive plan adequately addressed the key concerns of the many diverse camps in the broader AI debate.

While a full summary of the plan and related executive orders merits a series of posts, I’ll cover some of the plan’s most important provisions. After that, I’ll explore how the prioritization and execution of those provisions will determine whether this represents lasting stability in AI policy or merely a temporary moment of appreciation for a plan that, while well executed, is at times vague and noncommittal.

What Does the Plan Actually Include?

Like a good buffet, the plan offers something for everyone. Across 90 policy recommendations, three core pillars, and three fundamental principles, just about every stakeholder group is likely to find something to agree with. The three pillars—AI innovation, AI infrastructure, and AI international diplomacy and security—address concerns about American workers and communities, the infusion of ideological bias into models, and the possibility of catastrophic risks arising from misuse of leading AI models.

AI Safety

Those fearful of catastrophic harms posed by AI, such as sophisticated cyberattacks and bioweapon development by non-state actors, could tout several provisions as indicative of a safety-conscious White House. One of the principal barriers to reliable and trustworthy AI systems is the proverbial “black box” behind AI development. The plan takes this issue into account in numerous recommendations, including initiating a project led by the Defense Advanced Research Projects Agency (DARPA) to study interpretability and AI control systems and tasking the Department of Defense, Department of Education, Center for AI Standards and Innovation (CAISI), and Department of Homeland Security with hosting an AI hackathon to reward academics who create the best test of AI system transparency and effectiveness.

The plan also addresses a widespread concern among the safety community related to the use of AI models to develop chemical, biological, radiological, nuclear, or explosives (CBRNE) weapons. Underpinning this fear is the very real possibility of bad actors acquiring advanced AI systems and directing them to assist with the rapid development and deployment of CBRNE weapons. To mitigate that possibility, the plan directs CAISI to test AI systems for national security risks, monitor the possibility of bad actors using foreign systems toward such ends, and take the necessary steps to recruit and retain experts to oversee such work. 

The plan’s safety recommendations, in addition to other cybersecurity measures it outlines, explain why the safety community generally applauded it. Having spent the days following the AI Action Plan release at numerous AI-related happy hours on the Hill, I can confirm that many of these provisions pleasantly surprised individuals and institutions worried about the risks of AI. Their surprise was warranted: The executive order that gave rise to this plan specified that it is the policy of the United States “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” Subsequent remarks by Vice President Vance chastising the European Union for regulations that prioritized the sorts of safeguards that ended up in this plan explain why they were caught unawares.

AI Innovation

The plan contains many provisions that fit more squarely with the notion of “AI dominance”—affirmatively charting a course for AI acceleration, rather than just mitigation. For those concerned about the regularly discussed “AI race” with China, the plan doubles down on strategies intended to accelerate AI advances and increase AI adoption across the U.S. public and within the U.S. military. Among myriad recommendations targeted toward AI innovation, a call for increased open-source and open-weight AI model development is particularly noteworthy.

Not long ago, closed models were regarded as the sole means of safely pushing the AI frontier. DeepSeek changed the political discourse on this point; China’s surprise AI upstart sparked fear across the Hill that its highly capable open-source model (or some variant thereof) could become the default model the world over. The Hill subsequently pulled a 180 with respect to U.S. development of open source. This plan cements that turnaround through several narrower recommendations. For example, it charges several agencies with scaling up the National Artificial Intelligence Research Resource Pilot, which will allow researchers and academics to more easily access the compute necessary to develop and test AI models. Relatedly, the plan includes a push for greater adoption of open-source models, led by the National Telecommunications and Information Administration. 

Other key recommendations related to innovation range from creating regulatory sandboxes that will lower barriers to testing novel AI tools to kick-starting academic research into how AI may improve productivity across various sectors.

AI Workforce

Actors concerned about the disruptive consequences of such advances can also find something to applaud in the plan. Numerous provisions build on the first principle mentioned in the plan: “[E]nsuring that our Nation’s workers and their families gain from the opportunities created in this technological revolution.” Realization of that principle centers on two key recommendations. The first is “Empower American Workers in the Age of AI.” This sets out recommendations intended to “deliver more pathways to economic opportunity for American workers.” Pursuant to this goal, the Department of Labor, Department of Energy, National Science Foundation (NSF), and Department of Commerce are directed to incorporate AI literacy programming into workforce development initiatives as well as into the classroom. The plan also calls for more robust and transparent research into how AI is changing labor demand across various industries. This will presumably inform updated guidance and rules reflecting trends in job displacement. Relatedly, displaced workers may soon find it easier to access retraining programs thanks to the plan’s recommendation that the Labor Department allocate extra discretionary funds to cover such efforts.

The second recommendation is “Train a Skilled Workforce for AI Infrastructure.” The plan details a thorough set of recommendations to build the nation’s capacity to meet the incredible energy demands of the AI industry. Construction of new power plants, data centers, and semiconductor fabrication plants will hinge on the availability of plumbers, engineers, and other specialists. These training-intensive professions receive special attention in the plan. The Labor Department and Commerce Department are tasked with identifying which professions can bring about the plan’s bold vision for physical infrastructure construction. Those two agencies as well as the Department of Energy, Department of Education, and NSF are also directed to partner with states to create “industry-driven programs that address workforce needs tied to priority AI infrastructure occupations.” This workforce initiative even reaches down to middle and high school students. One recommendation maps out programs intended to spark interest among young Americans in these AI infrastructure occupations via “pre-apprenticeships.”

Controversial Provisions

That said, some provisions elicited more unified opposition. Under a general recommendation that frontier AI align with “free speech and American values,” the plan announces an update to federal procurement guidelines that limits contracting with developers that fail to “ensure their systems are objective and free from top-down ideological bias.” This recommendation, combined with a related executive order on “Woke AI,” has drawn scrutiny from several actors. Some question whether such objectivity is technically feasible. Others raise legal concerns—namely, the possibility that this update violates the First Amendment.

Another lightning rod took the form of a proposed limit on federal funding for states that enforce “burdensome AI regulations” that would result in such federal support being “wasted.” This recommendation resurfaced many of the still-smoldering AI moratorium debates and presents related legal questions.

Several other outlets have done tremendous work detailing these and other recommendations. As the dust has settled and podcasts have aired, however, it is perhaps more important to turn to what the plan—and the public’s response to it—suggests about the future of AI discourse.

How Does This Reflect the Current AI Discourse?

The AI Action Plan’s reception reveals as much about the current state of AI governance as it does about the plan’s substantive merits. Beyond the specific policy recommendations, the document provides a picture of where American AI discourse has settled after years of debate. Three key themes emerge from both the plan’s contents and the surprisingly broad support it has garnered across traditional ideological divides.

First, innovation with speed bumps represents the new consensus framework, displacing earlier calls for AI development moratoriums or comprehensive pauses. Any notion of halting AI development is definitively off the table. A broad base of political support (albeit not ubiquitous) has formed around the idea that AI leadership constitutes both a national security and an economic security imperative, with the plan’s opening declaration that “[w]inning the AI race will usher in a new golden age” serving as the bipartisan baseline for policy discussion. The remaining questions center on how and when to apply regulatory speed bumps and, perhaps most critically, who gets to determine their location and size.

This framework acknowledges that technological momentum will continue while creating structured moments for evaluation and course correction—think regulatory sandboxes instead of regulatory barriers and conditional deployment rather than deployment prohibitions. The speed bump approach reflects a fundamental shift in how American policymakers think of AI governance. Whereas previous regulatory frameworks often presented binary choices between permission and prohibition, the plan operationalizes a more nuanced philosophy that assumes continued acceleration while building in systematic checkpoints. The plan’s emphasis on a “try-first” culture and rapid deployment testing signals that regulatory intervention will focus on managing development trajectories rather than questioning development itself. 

Yet this consensus obscures deeper tensions about regulatory authority and democratic accountability. The plan distributes speed bump determination across dozens of agencies—from the National Institute of Standards and Technology’s (NIST’s) technical evaluations to the Office of Management and Budget’s (OMB’s) procurement guidelines to the Defense Department’s hackathon initiatives—without clearly establishing coordination mechanisms or democratic oversight processes. Ultimately, this creates a technocratic governance model that favors expert judgment over public input—a situation driven in part by the technology’s complexity and by the political system’s limited ability to manage the novel challenges posed by such complexity.  

Another theme is that China now effectively operates as America’s AI muse, shaping both the urgency and the boundaries of domestic AI policy. The plan’s introductory quote illustrates this dynamic, with President Trump declaring it “a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance.” This framing transforms Chinese AI advancement into the primary variable determining American regulatory calculations—the aforementioned speed bumps hinge directly on how China’s efforts progress. Recent releases of impressive models like DeepSeek r1 and Kimi K2 have shattered comfortable assumptions about American technological superiority, demonstrating that China’s AI capabilities rival or potentially exceed those of leading American companies. The shock of DeepSeek’s emergence reverberated through Washington precisely because it revealed how quickly perceived advantages could evaporate. No politician wants to find themselves responsible for allowing China to keep pace or even causing the U.S. to fall behind.

The final theme is that the moratorium debate remains the massive elephant in the room, casting a shadow over the plan’s otherwise confident pronouncements about American AI leadership. Though Congress has largely shut down for the summer, moratorium advocates continue working behind the scenes to craft language that could secure majority support when the August recess comes to a close. The persistence of moratorium advocates speaks to the tension that the plan acknowledges but does not resolve: the gap between those who view rapid AI advancement as an unprecedented opportunity and those who see it as an existential threat necessitating state intervention in the absence of congressional action.

President Trump largely avoided taking sides during the initial moratorium debates, maintaining strategic ambiguity that allowed the administration to focus on other priorities while leaving Congress to work through the contentious issues. Whether this plan signals a more aggressive White House posture going forward remains unclear. The plan’s silence on congressional moratorium proposals may prove temporary—future Chinese breakthroughs or domestic AI incidents could force the administration to engage more directly with Congress on the extent to which states can dictate what many regard as a matter of interstate commerce. 

What Are the Key Questions Going Forward?

While the plan’s breadth earned widespread initial approval, its comprehensiveness also raises practical concerns about implementation and sustainability. The document’s 90 policy recommendations span virtually every aspect of American AI governance (with the notable exception of intellectual property—though Trump voiced support for the use of copyrighted material for AI training in a speech announcing the plan); yet this ambitious scope inevitably creates challenges that will shape the initiative’s ultimate effectiveness. Three critical questions emerge as determinative of whether the plan represents a durable policy framework or merely an aspirational blueprint.

Prioritization presents the most immediate challenge, as political capital and administrative bandwidth cannot possibly support simultaneous advancement of each recommendation. The sheer volume makes clear that some initiatives will advance rapidly while others languish on bureaucratic back burners. Existing infrastructure-related initiatives led by the Department of Energy may mean that such proposals command the strongest support, given their alignment with traditional congressional priorities around job creation and economic development. The plan’s emphasis on streamlined permitting for data centers and energy infrastructure, for instance, likely resonates with lawmakers who have long championed similar reforms for conventional infrastructure projects. Conversely, more contentious provisions—such as limiting federal funding for states with burdensome AI regulations—may face significant pushback from both Democratic governors and Republican advocates of federalism.

Institutional capacity constraints pose equally formidable obstacles to the plan’s ambitious timeline and scope. The extensive roster of agency assignments raises fundamental questions about whether federal departments possess the personnel, expertise, and organizational infrastructure necessary to execute their newly assigned responsibilities. NIST, for example, finds itself tasked with everything from revising AI risk management frameworks, to conducting evaluations of Chinese frontier models, to establishing new technical standards for high-security data centers—a portfolio that would challenge even a fully staffed organization operating at peak efficiency.

Yet many of these agencies, including NIST, have experienced significant personnel losses and declining morale in recent years, leaving them ill equipped to absorb substantial new mandates. Relatedly, the Department of Commerce, which must simultaneously manage existing trade and export control responsibilities while building new capabilities around AI evaluation and international technology diplomacy, faces a 2.5 percent budget cut under the House’s current proposal. The plan’s success may ultimately hinge less on policy design than on whether these institutions can rapidly acquire the human capital and resources that complex AI governance demands.

Public attention represents perhaps the most volatile variable affecting the plan’s long-term viability. Despite fascination with AI governance in D.C., San Francisco, and Austin, high-level debates around the trajectory of frontier models remain distant from most Americans’ daily concerns, ranking well below health care, economic security, and traditional political issues in polling data. This disconnect creates a precarious foundation for sustained policy commitment, particularly given the plan’s reliance on long-term institutional investments and complex regulatory frameworks that require years to mature.

Should public sentiment shift toward skepticism or outright opposition to AI development—perhaps triggered by high-profile failures, job displacement, or privacy breaches—the administration may find itself politically compelled to abandon or substantially modify these ambitious plans. Historical precedent suggests that emerging technologies can rapidly transition from objects of public optimism to sources of widespread anxiety, as occurred with nuclear power following the Three Mile Island accident.

The plan’s current political sustainability may depend on public indifference rather than ideologically driven support for distinct visions of AI policy, a fragile equilibrium that could dissolve if AI development produces visible negative consequences or becomes more explicitly entangled with broader cultural and economic grievances.

***

The AI Action Plan’s surprisingly broad reception demonstrates two things: the current state of American AI governance and its inherent fragility. While the plan successfully bridges competing visions for AI’s future—balancing innovation with safety, and worker interests with adoption—this very comprehensiveness might be its Achilles’ heel. The plan’s 90 recommendations feel less like a unified strategy and more like a political compromise, delaying tough decisions about priorities, trade-offs, and democratic accountability. Its long-term success won’t hinge on its ambitious goals, but on the administration’s ability to implement them while grappling with rapid technological change, geopolitical pressures, and unpredictable public sentiment. Ultimately, the plan might be remembered less for what it achieves and more for what it reveals about the immense challenge of governing transformative technology in a volatile and accelerating world.


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Senior Editor at Lawfare.