Cybersecurity & Tech

Is It Time for an AI Expert Protection Program?

Kevin Frazier
Tuesday, July 1, 2025, 1:00 PM

AI experts face security risks as geopolitical targets. It’s time to consider protection programs similar to witness security to safeguard critical talent.



Editor’s Note: This article was informed by participation in the AI on the Battlefield event hosted by the Army Cyber Institute in June 2025.

Five inputs dictate the speed of progress in artificial intelligence (AI): compute, data, algorithms, energy, and talent. With respect to the U.S.’s AI inputs, the first four are guarded from foreign interference through a variety of legal and physical security mechanisms. The fifth, talent, is dangerously exposed. To be more blunt, the recent Israeli attacks on at least 10 Iranian nuclear scientists make two things clear: First, technological expertise is widely recognized as tightly bound to geopolitical power; and, second, eliminating those experts can stall a nation’s progress, rendering them strategic targets.

The rationale behind Israel’s decision to target individuals working on advanced weapons systems is not new. Other countries have long curated and acted on similar lists of their adversaries’ leading scientists. Since 1944, more than 100 nuclear scientists have been “targeted,” which Jenna Jordan and Rachel Whitlark define as having been captured, threatened, injured, or killed. This sort of targeting can deprive militaries of expertise and set back a nation’s technological progress by zeroing in on scientists for whom there are few substitutes.

Viewed narrowly, this history has little to do with AI. But that myopic perspective misses the parallels between nuclear scientists and AI experts. Important differences exist between the two populations: The former are typically state employees tasked with working exclusively on their country’s weapons program; the latter are generally privately employed and split their time between commercial and defense products. What’s more, the underlying technologies are distinct. Nuclear weapons have unquestioned destructive capabilities; AI’s capacity to provide militaries with comparable advantages is less widely appreciated. Yet, from Ukraine to Israel to the United States, militaries increasingly rely on AI to bolster their weapons systems and broader warfighting efforts.

Like nuclear scientists, AI experts are few in number and great in military value. This raises the question of what’s being done, if anything, by the U.S. government in its own capacity as well as in collaboration with private actors to ensure the safety of AI experts who may not realize they’ve become strategic assets.

The Value of AI Experts

America’s pursuit of AI dominance requires addressing the relative dearth of formal security measures for the individuals actively driving the field forward. American AI industry leaders—such as Fei-Fei Li, founding co-director of Stanford’s Institute for Human-Centered AI; Sam Altman, CEO of OpenAI; Alexandr Wang, founder of Scale AI; Ilya Sutskever, co-founder of OpenAI and CEO of Safe Superintelligence Inc.; Mira Murati, former CTO of OpenAI and founder of Thinking Machines Lab; and Andrej Karpathy, former senior director of AI at Tesla and an OpenAI co-founder—have outsized potential to dictate America’s AI future. The value of their expertise can be inferred from several data points. One imperfect piece of evidence that a select few AI experts wield especially valuable insight and influence over the field is Altman’s rapid return to the helm of OpenAI after the board dismissed him. Employees made clear that they regarded him as uniquely capable of leading the firm.

Most recently, Meta appears to have concluded that Wang’s expertise warrants an extraordinary investment. CEO Mark Zuckerberg launched a $15 billion AI superintelligence effort tasked with improving the company’s AI prospects—a headline-generating maneuver that included luring Wang, previously the CEO of Scale AI, to the initiative.

And, perhaps most shockingly (to those unfamiliar with massive tech valuations), Ilya Sutskever has raised billions of dollars for his startup, Safe Superintelligence Inc., with nothing more to show publicly than a sparsely populated website.

Many other individuals likely belong in this camp of people with nearly unparalleled AI expertise. According to Altman, Meta recently offered many OpenAI staff members $100 million signing bonuses—a somewhat unbelievable figure that nevertheless shows the economic significance of leading AI talent. 

The significance of a relatively small AI research community is not just an American reality. Research by the Hoover Institution revealed that about a quarter of the researchers behind China’s DeepSeek had spent some time at U.S. academic institutions. Given that just 200 researchers drove the company’s shocking AI progress, it stands to reason that the U.S. could have delayed DeepSeek’s efforts by monitoring that quarter (roughly 50 individuals) and impeding their return to China. This is not an endorsement of that tactic but, rather, an illustration of the transformative impact of a tiny group of AI researchers.

Whether you regard today’s AI as a bubble or merely overhyped, these examples indicate that a small community of experts has tremendous economic significance and, likely, unique insight into AI development that, if undercut, could delay advances in the space. Put differently, even if many AI startups are overvalued, a strike on their leadership could derail contracts between those companies and the U.S. military and cause widespread economic disruption. Consequently, this small community is very much of national security and geopolitical interest.

A Very Brief History of Targeting Experts

If recent history is any guide, the governments of U.S. adversaries likely have detailed and comprehensive lists of the most talented AI experts here in the U.S., around the world, and within their own borders. 

Planned and successful killings of scientists involved in weapons development have taken place since at least the 1940s, though the vast majority of reported efforts have occurred in recent decades. A member of the Manhattan Project urged Robert Oppenheimer to consider the assassination of a German scientist working on an atomic bomb for Hitler. In 1980, Yahia El-Meshad, the leader of Iraq’s nuclear program, was found beaten to death in his Paris hotel room. Four Iranian nuclear scientists were allegedly assassinated between 2010 and 2012. 

Fast forward to June 2025, when Israel tipped its hand with respect to its focus on the leading experts behind Iran’s nuclear program: Israeli forces killed nine of Iran’s top nuclear scientists. It seems highly probable that Israel maintains a much longer list of Iranians with special significance to that country’s nuclear program.

Again, though some may doubt AI forecasts, officials such as Vice President Vance, Chinese President Xi Jinping, French President Emmanuel Macron, and Indian Prime Minister Narendra Modi have reached a different conclusion. Their shared ambition to lead the AI competition suggests a willingness to go to extreme lengths to curtail the progress of their adversaries. That may warrant increased focus on protecting AI experts at home and abroad.

It may be the case that some of the most well-known AI experts, such as Altman, already have extensive security details. OpenAI was recently hiring for an “executive protection operator.” To the extent others have such “operators,” the increased protection examined here may be achieved simply by facilitating more information sharing between private security teams and the relevant U.S. officials focused on protecting all inputs—talent included—in the nation’s defense efforts. Less well-known AI experts, and experts at less well-funded startups, however, may lack such private protection and nevertheless find themselves targeted.

The Anomalous Failure to Prioritize Protection of a Critical AI Input

A glimpse at the protections afforded to the other AI inputs exposes the dearth of safeguards against adversaries targeting experts as a major policy blunder. Compute, for instance, has been subject to various export control regimes. The Biden administration developed a complicated framework to reduce the odds of compute leaking from a trusted ally into the hands of an adversary via the gray market. Though the Trump administration rescinded that rule, observers expect that a similar set of principles—namely, a focus on preventing chips from crossing country lines—will undergird whatever rule follows. Many observers would like monitoring of compute to go even further, perhaps by enabling location tracking on chips.

Data is held close by major labs, which rely on leading cybersecurity techniques to prevent competitors and adversaries from accessing their vast troves of information. Congress is actively weighing requirements to raise the minimum acceptable level of data security for labs.

Algorithms are likewise shielded by cybersecurity best practices, in addition to trade secret law and employment law. Individuals face stiff penalties if they share this proprietary information with third parties. Though some allege that the tendency of AI lab workers to congregate at Bay Area parties undermines these protective measures, that flaw reflects a lack of enforcement more than a lack of a security framework.

Finally, the nation has long regarded defense of various sources of energy as a key priority. The resources and political attention spent on defending power plants, transmission lines, and other aspects of the energy sector are likely justified given an uptick in foreign efforts to attack this critical infrastructure.

No clear strategy exists to protect the nation’s AI experts, despite unmistakable signs of increased political violence and the ongoing targeting of such individuals by other nations.

Different Approaches to Protecting AI Experts

The aforementioned individuals on my short, incomplete list of AI experts share a common characteristic: They do not work for the government. This poses a tricky question about the extent to which public resources should go toward defending individuals working in the private sector. 

A somewhat obvious approach would be to treat AI experts much like participants in the witness protection program. While that model is not a perfect fit, several aspects may warrant mirroring. The U.S. Marshals Service is tasked with protecting the health and well-being of government witnesses as well as their dependents. Marshals provide around-the-clock security, usually create new identities for protected individuals, and commonly relocate them. While it’s unlikely that AI experts would agree to a name change, comparatively smaller measures, such as relocation to a new city, may be on the table depending on the likelihood of being targeted. (Though surely many would resist that change, too.) If that likelihood proved great enough, the government could offer a voluntary relocation: for example, moving a researcher who works remotely for a leading lab from a rural community to a city whose more robust law enforcement presence may deter would-be attackers. Relocating experts to a common city would likely also reduce the program’s costs (albeit at the risk of making it easier for adversaries to find experts concentrated in one or a handful of places).

Notably, this would have to be a voluntary program to avoid running headfirst into constitutional prohibitions on government interference with freedom of movement, among several other clear concerns. 

Another approach could include subsidizing leading AI labs that bolster their own protections for employees. Labs already invest in myriad security measures, such as badging requirements, background checks, and campus security, to protect their employees. This strategy would raise far fewer civil liberties concerns and could perhaps create a market in which AI experts favor the companies leading in employee protection.

Finally, a small step toward protecting AI experts might be a public-private partnership in which AI labs establish information-sharing channels with the FBI, Secret Service, and Department of Homeland Security. This could take the form of labs encouraging their top officials to disclose planned trips abroad, worrying email messages, and suspicious activity to a trusted group of government actors. Of course, the success of this program is very much contingent on whether experts trust those actors to closely guard their sensitive information and not abuse it to achieve political or institutional aims. The tragic story of Nikolai Vavilov reveals that an expert’s own government, more so than adversaries, may represent the greatest threat to their well-being.

Vavilov, who lived by the motto “Life is short. We must hurry,” was a leading geneticist in the first half of the 20th century. His knowledge carried immense economic and political significance at a time of widespread famines and food insecurity. Born in Russia, he traveled around the world, including to the U.S., to learn more about cutting-edge agricultural practices. By 1929, the Soviet government had begun keeping a record of his activities out of fear that he was conspiring with its domestic political opponents. On yet another trip, in 1940, he was seized by Soviet officials. It was his last public appearance. As detailed further below, state protection of experts must be paired with protections against excessive state action.

Another avenue for reform lies in international law. As a matter of customary international law (and codified law for parties to the 1977 Additional Protocol I to the 1949 Geneva Conventions), nations must adhere to the rule of distinction. This mandates that they “distinguish between civilian population and combatants.” The former may not be the “object of attack.”

So, do AI experts constitute civilians? That’s unclear. As with most matters of international law, the answer is highly context specific. Scientists working for or with the armed forces have been defined as combatants by the Department of Defense’s Law of War Manual. Civilians who take “a direct part in hostilities” may also qualify as targetable combatants. This is an elusive standard. As Michael Schmitt of the Lieber Institute at West Point pointed out: 

[D]irect participation determinations are necessarily contextual, typically requiring a case-by-case analysis. But case-by-case determinations need not be eschewed. On the contrary, sometimes they more precisely balance military requirements and humanitarian ends than mechanical applications of set formulae.

In contrast, a 1989 memo penned by Hays Parks reached the conclusion that targeting “civilian scientists occupying key positions in a weapons program regarded as vital to a nation’s national security or war aims” would run afoul of limitations on assassinations.

Clarity about how that balancing would come out in cases involving AI experts who work on a dual-use technology for companies that do some of their work for militaries may reduce the odds that nations target those individuals, rather than leaving nations to believe they can justify such strikes under a loose standard.

Next Steps

Myriad other options surely exist. Creative solutions should be on the table. This will undeniably be an expensive endeavor, but the benefits likely exceed the costs. An inflation-adjusted estimate of the annual cost of witness protection is a little more than $80,000 per individual. While the cost of protecting individuals who refuse to remain anonymous would surely be higher, increased protection seems likely to be a wise investment from both a national security and an economic development perspective. As noted above, the market value of these experts exceeds nine figures, and the expected aggregate domestic economic impact of AI is immense.
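To make the scale of that tradeoff concrete, here is a minimal back-of-the-envelope sketch. It takes the roughly $80,000 annual witness protection figure cited above as its only grounded input; the cohort sizes, the cost multiplier for protecting public-facing (non-anonymous) experts, and the nine-figure value assigned to a single leading expert are all illustrative assumptions rather than estimates from this article.

    # Back-of-the-envelope cost comparison for an AI expert protection program.
    # Only the $80,000 baseline comes from the article (its inflation-adjusted
    # witness protection estimate); every other figure is a labeled assumption.

    BASELINE_ANNUAL_COST = 80_000      # per person, per year (figure cited above)
    NON_ANONYMOUS_MULTIPLIER = 5       # assumption: public-facing experts cost more to protect
    SINGLE_EXPERT_VALUE = 100_000_000  # assumption: nine-figure value of one leading expert

    for cohort_size in (100, 250, 500):            # assumption: plausible cohort sizes
        annual_cost = cohort_size * BASELINE_ANNUAL_COST * NON_ANONYMOUS_MULTIPLIER
        ratio = annual_cost / SINGLE_EXPERT_VALUE  # program cost vs. one expert's assumed value
        print(f"{cohort_size} experts: ~${annual_cost / 1e6:.0f}M per year "
              f"({ratio:.1f}x the assumed value of a single expert)")

Even under these deliberately generous assumptions, covering several hundred experts would cost on the order of tens to a couple hundred million dollars a year, a small sum relative to the aggregate economic stakes described above.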

A hypothetical lends further support for incurring those costs: Imagine a successful attack on a handful of AI experts meeting at one of those storied Bay Area parties. The ripple effects could be significant. Experts may think twice about going to work. Some may even leave the field. A few might leave the country and lend their expertise to a lab in a nation that places a higher priority on their security. 

This is a bleak possibility, but one real enough that we cannot shy away from addressing it. Critical open questions include which labs and which individuals would qualify for heightened protection. It’s also important that any implemented framework operate with some degree of transparency and a clear line of accountability. The government could and should consult with the experts at the Privacy and Civil Liberties Oversight Board, for example, to design an approach that aligns with core American values and the law. Other stakeholders, especially the labs and the experts themselves, should also shape this conversation. Finding the right balance between affording experts the requisite degree of security and preserving the essential liberties and freedoms we all cherish will be no easy task, but it may soon be an unavoidable one.


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Senior Editor at Lawfare.
