Dominating AI Requires Understanding AI
AI dominance will require more than faster models—it will require breakthroughs in understanding, testing, and securing frontier AI.
A fundamental tension previously defined the artificial intelligence (AI) policy landscape: Proposals for the government to oversee testing and evaluation of frontier AI models were regarded as antithetical to rapid development and diffusion of the latest and greatest AI. This perceived trade-off split AI stakeholders into two camps: so-called doomers and accelerationists. The former lobbied for variants of predeployment testing of models by the government or some third party. The latter, including the Trump administration, advocated for American “global dominance” through limited government intervention until evidence indicated that existing law was unable to steer AI efforts toward the nation’s broader economic and national security goals.
That era of AI policy appears to have come to an end. The Trump administration has signaled, albeit inconsistently and ambiguously, its interest in a new path: dominating AI by understanding it.
Recent reporting suggests that a planned executive order will facilitate greater information sharing between AI labs and government cybersecurity programs to accelerate identification of any risks posed by new models and, by extension, mitigation of those risks. The order presumably reflects White House officials taking the cybersecurity risks of frontier AI models seriously. They may likewise be increasingly aware of the biosecurity risks presented by leading models. As models become more capable of sustained problem-solving and lengthy research efforts, “AI may soon grant people extremely dangerous powers: to synthesise viruses, generate novel neurotoxins or assemble omnicidal ‘mirror life.’”
The planned order and the underlying policy perspective may also reflect an understanding that adoption of the most powerful AI models, both in the military and across the private sector, will lag until private and public stakeholders are confident that the technology will operate in a reliable, effective, and controllable fashion. For obvious reasons, models with any of the cyber- or biosecurity risks mentioned above cannot be made publicly available absent sufficient and reliable technical safeguards and broader societal readiness.
There’s also a chance that the administration’s AI policy leaders have connected the dots between technical AI progress (learning more about how AI works and why) and AI dominance (developing and deploying the most capable models at a pace and scale that exceeds that of our adversaries). The flurry of meetings between senior White House staffers and AI CEOs in recent weeks suggests that the president’s AI team recognizes this connection.
Whatever the reasons behind this apparent pivot, if it accurately characterizes the Trump administration’s current thinking, then a series of straightforward policy maneuvers can advance a “dominating AI by understanding AI” agenda.
America Is Underinvesting in the Science of AI
Federal support is necessary to scale up existing efforts to achieve breakthroughs in the “science of AI.” As it stands, individual labs are pursuing disjointed research agendas, yet they have still managed to achieve impressive outcomes. Anthropic, for instance, recently announced a method to translate a model’s internal processes, which are numerical operations, into natural-language text. This translational task may unlock critical insights into how models function and when and why they may behave in an unexpected, dangerous, or deceptive fashion. Such insights will be highly valuable for determining whether models are safe to release. Yet, as Anthropic noted, this is a resource-intensive effort; the translation process requires vast amounts of compute.
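To give nontechnical readers a feel for what this “translation” involves, consider a toy sketch of one well-known interpretability idea: projecting a model’s internal activation vector onto its vocabulary and reading off the nearest tokens (the intuition behind the “logit lens”). This is not Anthropic’s method, and the vocabulary, weights, and activation below are invented stand-ins; the point is simply that turning numbers into words is itself a computation, which is why doing it at scale consumes so much compute.

```python
# Toy illustration only: project a hidden activation onto a tiny vocabulary
# and read off the closest tokens. Real interpretability work operates on
# models with billions of parameters, which is where the compute cost comes from.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["safe", "deploy", "refuse", "deceive", "comply", "halt"]  # invented
d_model = 16  # toy hidden-state dimensionality

# Stand-in for a trained unembedding matrix: one weight row per token.
W_unembed = rng.normal(size=(len(vocab), d_model))

# Stand-in for an activation captured mid-computation inside the model.
hidden_state = rng.normal(size=d_model)

# "Translate" the numerical state: score every token, keep the top three.
logits = W_unembed @ hidden_state
top = np.argsort(logits)[::-1][:3]
print("nearest-token reading:",
      [(vocab[i], round(float(logits[i]), 2)) for i in top])
```

Anthropic’s actual approach is far more sophisticated, but the sketch captures why insight into a model’s internals must be computed rather than simply read off.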
OpenAI is likewise allocating some of its staff and resources to related research inquiries. For example, it is closely studying how to align agentic AI tools, which can perform extended tasks with little to no user oversight, to the user’s preferences and directions. However, the lab has previously been called out for devoting a minimal share of its computational resources to this sort of study. While it’s unclear what percentage of the company’s resources goes toward such efforts, the fact that OpenAI and others have been finding ways to channel more resources toward their most profitable ends suggests that the share is likely small.
The labs are not alone in trying to make the most of scarce resources. Researchers affiliated with nonprofits and universities are also working hard to unlock advances in understanding AI, albeit with inadequate support. One example of a promising line of research: A team at the University of Louisiana at Lafayette tapped into the National AI Research Resource (NAIRR) to study cybersecurity issues posed by AI coding assistants. NAIRR is a public-private collaborative program that makes compute, data, and training available to educators and researchers around the country. Over the course of a two-year pilot, it has facilitated more than 600 research and education projects that otherwise may not have been completed.
Despite significant private in-kind donations to NAIRR, the Resource is meeting only a fraction of the demand for AI resources and the broader need for basic AI research and development (R&D). “For every dollar allocated to NAIRR, the private sector is investing roughly $23,000 in AI,” according to a recent analysis by the Center for a New American Security. While parity in public and private AI investment is unrealistic and unnecessary, the scale of the gap suggests an underinvestment in the basic R&D that has historically led to significant, societally beneficial innovation. In fact, the National Security Commission on AI called for $32 billion in annual nondefense AI R&D by fiscal year 2026, yet actual investment has been about a tenth of that.
The upshot is that private and public research on the science of AI is necessary and ongoing but would benefit from additional coordination and backing. Congress is in the best position to steer the U.S. toward this policy outcome. In previous technological eras that required significant R&D, it was Congress that stepped in to establish a broad, enduring effort. When the U.S. started to lag behind Japan in semiconductor production, Congress allocated nearly $1 billion to a concentrated public-private research project to catch up. Lawmakers have plenty of proposals before them to chart a similar path in the AI space. Here are three actions that Congress could take: passing the CREATE AI Act, which would codify NAIRR and increase its funding; passing the AI Talent Act, which would bring more AI experts into the federal government; and appropriating more funds to the National Institute of Standards and Technology, which is home to the Center for AI Standards and Innovation.
It is unclear whether Congress will act in a similar fashion in the near future with respect to AI R&D. In the interim, the executive branch can still play a meaningful role in advancing a “dominance by understanding” agenda.
What the Executive Branch Can Do Now to Advance the Science of AI
Three immediate steps by the Trump administration would meaningfully assist with sustaining the U.S.’s lead in the science of AI.
First, foster more coordination and information sharing among the leading labs on methods to mitigate cyber- and biosecurity risks from their models as well as from models being designed and deployed by our adversaries. This step should be paired with agreements on when, and under what conditions, labs will mutually refrain from deploying models with certain capabilities. This latter part is essential to reducing competitive pressures that may otherwise lead labs to deploy a model with capabilities for which the broader society is unprepared. Section 708 of Title VII of the Defense Production Act (DPA) allows the president to confer with private actors and create voluntary agreements and plans of action that provide for the national defense.
The risks posed by leading AI models certainly implicate the national defense—especially because the statutory definition of national defense stretches to include matters such as protection of critical infrastructure. It has become apparent to the federal government that cyber threats may imperil our utilities, dams, hospitals, and other key institutions. The Cybersecurity and Infrastructure Security Agency (CISA) recently urged such entities to more regularly and robustly plan for operating offline due to increased concerns around cyberattacks. AI advances may only exacerbate the need for such planning and forethought.
Exercise of this authority also turns on whether the attorney general, in consultation with the chairman of the Federal Trade Commission, has confirmed that the purpose of the voluntary agreement could not reasonably have been achieved through an alternative path raising fewer anticompetitive concerns. This additional hoop exists because participants in a voluntary agreement under Section 708 are afforded a defense in civil or criminal antitrust litigation: that they were taking part in an initiative launched by the president.
If the attorney general were to undertake such an effort with respect to AI, they would need to determine whether existing information-sharing mechanisms among labs suffice. This inquiry would lead them to the Frontier Model Forum (FMF), a nonprofit that assists leading labs with three tasks: establishing best practices for AI safety and security, promoting the science of AI, and fostering information sharing among labs, researchers, and other AI stakeholders. While the FMF can lawfully enable labs to discuss standards and best practices, such as evaluation methodologies, if the forum started to tackle questions about how labs should jointly coordinate around whether and how to release models with certain capabilities, then violations of antitrust law could become a concern. That’s precisely why Section 708 may be properly invoked in this space. When firms share information like future product road maps, anticipated release schedules, and forecasts of capabilities, they are engaging in behavior that may hinder competition by creating a more coordinated, concentrated AI ecosystem. Voluntary agreements established under Section 708 shield labs from undue exposure to antitrust litigation, an important consideration given that antitrust regulators have already been paying close attention to the AI space.
Second, bring CISA back up to full speed. The administration previously oversaw the decimation of CISA’s workforce: Budget cuts forced the agency to shed roughly a third of its staff, about 1,000 employees. The remaining staff was then furloughed during the recent 75-day shutdown of the Department of Homeland Security, further constraining the agency’s capabilities. The new plan is to hire for more than 300 “mission-critical” positions. A surge in capacity could not come at a better time: Acting CISA Director Nick Andersen is spearheading “CI Fortify,” a broad operation to prepare critical infrastructure organizations for destructive cyberattacks.
Increased resources and staffing are also key to CISA’s making full use of the Cyber Response and Recovery Fund. If and when the director determines that a significant incident has occurred or is likely to occur, such as one resulting in demonstrable harm to national security interests or the public health and safety of the people of the United States, they may direct financial support to specific entities for “protecting the assets of the entity, mitigating vulnerabilities, and reducing the related impacts.” Such a harm may be identified sooner rather than later as AI labs continue to progress at a speedy clip. Yet CISA’s depleted staff may not effectively steward disbursal of that assistance. The administration should kick-start a concentrated hiring spree above and beyond the 300 planned positions.
Third, again relying on the DPA, perform a quarterly survey of frontier AI labs to establish the capabilities of their most sophisticated models, including those that have been deployed only internally. Section 705 permits the president to survey companies to establish the capabilities of the nation’s industrial base. The gathered information may be kept confidential. The federal government can then leverage that information to shape policy and allocate resources. AI labs may not always disclose the capabilities of the models they’re using internally—leaving the government in the dark as to where the frontier of AI actually lies. Regular collection of this information by the executive branch can reduce the odds of the federal government being caught flat-footed when a lab subsequently announces a model with significant capabilities. Moreover, this will help the government assess the extent to which the U.S. remains ahead of its adversaries in developing frontier AI models.
This is far from an exhaustive list of measures that can shift the nation toward a “dominance by understanding” posture. But these measures are a good place to start and can help inform future laws, regulations, and policies.
Conclusion
The transition from “AI dominance” full stop to a more nuanced policy stance is an overdue shift. Though many Americans use AI and U.S.-based labs continue to lead the world in developing the most sophisticated models, public opposition is pronounced. A growing backlash and a broader sense of distrust in AI threaten the administration’s AI aspirations. Numerous infrastructure projects critical to the ongoing buildout have been slowed, stalled, or stopped. AI adoption remains uneven. Meanwhile, China continues to make steady progress on developing its own AI ecosystem, including science-of-AI initiatives. Just last month, the Chinese government announced a nationwide AI education mandate that promises to increase general AI literacy and to develop an even greater number of AI experts.
The lesson is straightforward: The nation that best understands frontier AI systems—their capabilities, limitations, risks, and reliability—will likely be the nation best positioned to deploy them at scale. That understanding will not emerge automatically from market competition alone. It will require deliberate investments in basic research, stronger coordination between government and industry, and institutions capable of translating technical insight into public trust and operational readiness.
A greater understanding of AI—how it works, what it’s capable of, how best to use it in transformative and mundane ways—should therefore be viewed not as a constraint on AI development, but as a prerequisite for sustained AI leadership.
