Intelligence

Myth of the AI Oracle

Joel Brenner
Sunday, April 5, 2026, 9:00 AM
Even the most capable AI will face limits on its ability to make predictions and substitute for strategic decision-making.
"Much of the current discussion of AI implies that the technology is a Single Thing, as if it were a great oracle atop a latter-day Mt. Sinai. It isn’t." Discarded monitors in East Wenatchee, Washington, March 13, 2013. Thayne Tuason/Wikimedia Commons

Editor’s Note: Artificial intelligence is increasingly used in the national security community, with applications as varied as logistics and targeting. MIT's Joel Brenner contends, however, that AI will not be able to solve the problem of intelligence surprises, arguing that data limits and the nature of AI reasoning will inevitably lead to gaps.

Daniel Byman

***

The astounding ability of artificial intelligence (AI) to produce plausibly human work and to radically improve enterprise administration and military tactics risks creating a seductive belief in its ability, acting alone, to beat human judgment in strategic decision-making. This is a dangerous illusion. The more consequential a decision, the more humans will not want machines alone to make it. As I note in a recent piece in International Security, this is especially true when it comes to avoiding intelligence surprises.

AI is capable of perfect recall, produces excellent summaries of available information, makes situational assessments almost instantly, and accelerates decision-making. It does so by expanding human capability in three dimensions: speed, scale, and complexity. But there are severe practical limits on its ability to make predictive judgments in complex environments, and equally severe theoretical limits that no computer, however powerful, can overcome. Understanding the reasons for these limits makes it less likely that users of this technology (or rather, these technologies) will delude themselves about what it can and cannot do.

The first practical obstacle is that AI systems don’t always have the data they need, even if they are awash in data they don’t need. For example, collection against North Korea and other hard targets remains difficult. Corrupted data, legal limitations on data, and the U.S. government’s classification barriers create further obstacles to data omniscience. Systematically created false information is an especially serious problem, and AI is worsening it, not solving it, through massively generated disinformation and deepfakes.

The same considerations limit AI training data. AI developers are already complaining about “data starvation.” To some degree, data starvation can be overcome with synthetic data: algorithmically created data that mimics the statistical properties of real data. Synthetic data compromises no one’s privacy because it does not use real cases; it expands simulated learning opportunities, such as training self-driving cars; and it can reduce biases or imbalances in a dataset. But it can only mimic historical data.
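To make the mechanism concrete, here is a minimal sketch of one simple way synthetic data can be produced: fit a distribution to real records and sample from the fit. Everything below is illustrative rather than any agency’s actual pipeline; production generators are far more elaborate, but they share the property just described, namely that the output inherits the statistics of the historical input.

```python
# Minimal sketch: synthetic data from a fitted multivariate Gaussian.
# All names and numbers are illustrative. Real generators (copulas, GANs,
# diffusion models) are more sophisticated, but the core limit is the same:
# the output mirrors the statistics of the historical input.
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for "real" historical records: 1,000 rows, 3 features.
real = rng.normal(loc=[10.0, 50.0, 0.3], scale=[2.0, 8.0, 0.05], size=(1000, 3))

# Fit the statistical properties of the real data...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...and sample synthetic rows that mimic them without copying any record.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

# The catch: patterns absent from the historical record are absent here too.
print("real means:     ", real.mean(axis=0).round(2))
print("synthetic means:", synthetic.mean(axis=0).round(2))
```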

Basing predictions on historical data is dangerous. During the Cuban missile crisis, the United States did not fly U-2s over Cuba for 38 days owing to a political disagreement about the dangers of those flights. Lacking photographic evidence, CIA analysts wrongly concluded that the Soviets had not placed offensive missiles in Cuba because they had never done anything like that before. That’s precisely the kind of error that a system trained only on historical data is inclined to make.
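A toy illustration of that failure mode, with invented numbers: a purely frequency-based estimator trained on a record in which an event has never occurred will put its probability at essentially zero, no matter what new circumstantial evidence arrives.

```python
# Toy sketch of the Cuba-style error: a predictor built purely on historical
# frequencies. The event "adversary deploys offensive missiles abroad" never
# appears in the training record, so the estimate is effectively zero,
# regardless of fresh evidence. All numbers are invented for illustration.
history = [0] * 500  # 500 past observations; none saw a deployment

def historical_probability(events, smoothing=1e-6):
    """Frequency estimate with tiny smoothing so unseen events aren't exactly 0."""
    return (sum(events) + smoothing) / (len(events) + smoothing)

print(f"P(deployment) = {historical_probability(history):.2e}")  # ~2e-09
```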

Nor can AI overcome the problem of closed minds. The certainty in the mind of Israel’s military intelligence chief that Egypt would not, could not risk crossing the Suez Canal in 1973, despite ample evidence that it intended to do so, is a case in point. President Trump’s certainty that the Iranian government would collapse before it could close the Strait of Hormuz is a fresher example. Lack of imagination creates similar difficulties. In 1941, high-ranking U.S. military officers refused to run a wargame involving an attack on Pearl Harbor because they thought the scenario too improbable to consider. AI cannot solve problems of this order.

The Pearl Harbor attack presents a case of the military strategist Michael Handel’s paradox: “The greater the risk, the less likely it seems to be, and the less risky it actually becomes. Thus, the greater the risk, the smaller it becomes.” AI can be trained to consider that paradox but cannot make it disappear.

Much of the current discussion of AI implies that the technology is a Single Thing, as if it were a great oracle atop a latter-day Mt. Sinai. It isn’t. AI systems are already competing within our government and in the private sector, and against adversarial systems—and that will continue. AI will therefore supercharge predictive competition. Consensus is unlikely. And if, by chance, competition did produce consensus within any government, it would enhance the likelihood of surprise.

The ability to predict the future is not constrained merely by practical challenges, however. It is fundamentally limited because the future is not determined; it is contingent. Any system in a given state can produce multiple future states. Consequently, even in a laboratory, prediction is inherently subject to error. Observational error, imprecision, and incompleteness create further forecasting difficulty. So observed Edward Lorenz, author of the famous “butterfly effect” in weather forecasting, and the observation led him to conclude that “precise, very-long-range forecasting would seem to be nonexistent.” Weather forecasting will continue to improve, and near-term forecasts have become impressively accurate, but weather forecasters will never be able to provide a definite statement of the future. The farther out one peers, the murkier the picture.
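Lorenz’s point fits in a few lines of code. The sketch below integrates his 1963 system twice from starting points that differ by one part in a billion. The crude fixed-step integrator is chosen for brevity, so the exact figures will vary with step size, but the qualitative result, rapid divergence followed by total decorrelation, is robust.

```python
# Two trajectories of the Lorenz (1963) system, started a billionth apart.
# Crude fixed-step Euler integration, chosen for brevity, not accuracy.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])  # "observational error": one part in a billion

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        # Separation grows roughly exponentially until it saturates at the
        # size of the attractor itself; long-range precision is then gone.
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.6f}")
```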

Sociopolitical prediction is in any case fundamentally more complex than weather forecasting. Not only do sociopolitical predictions occur in an unbounded or loosely bounded universe; they also involve a feedback loop. Carrying an umbrella won’t keep it from raining, but governments and enterprises do react to one another’s moves and adjust accordingly. Social and political prediction will therefore remain less accurate than weather forecasting. Of course, it’s valuable to improve prediction by hours and days and thus to lengthen warning times, but warnings will remain probabilistic and subject to error.
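The feedback loop can be made concrete with a deliberately toy model: an adversary that reads the published warning and moves only when warning is low. Every rule and number below is invented for illustration; the point is only that the forecast changes the behavior it is forecasting, something no weather model has to contend with.

```python
# Toy reflexivity model: the target of the forecast reads the forecast.
# All rules and thresholds are invented for illustration.
def adversary_acts(published_forecast: float) -> bool:
    # The adversary prefers surprise: it moves only when warning is low.
    return published_forecast < 0.3

for forecast in (0.1, 0.5, 0.9):
    attack = adversary_acts(forecast)
    print(f"published P(attack) = {forecast:.1f} -> attack occurs: {attack}")

# Note the inversion: the confident low forecast is the one most in error,
# because publishing it changed the outcome it was trying to predict.
```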

But how about artificial general intelligence (AGI)—that is, AI capable of human thought and emotion but more powerful than any human brain? Won’t it solve this problem? No, it won’t. AGI will exacerbate the machine’s hidden quirks and biases precisely because it will, by definition, be humanlike. “Quirks” and “biases” are merely words to describe the tendencies of every informational framework, human or otherwise.

AGI, meanwhile, will have become wonderfully persuasive. Even in its present state, AI can induce mental instability, change people’s political views, and persuade them to commit suicide. Why should humans expect that a machine devoted to national security decision-making will behave differently? Such a machine will have a point of view and will defend it, and a machine with a point of view can be surprised. When humans succeed in creating such a machine, we will not have eliminated surprise; we will merely have washed our hands of the problem in the delusional hope that a machine will make it disappear.

There are, of course, a great many things that AI can do and is already doing wonderfully well. Its total recall, its ability to organize and summarize vast amounts of data, and its capacity to generate plausibly human text are valuable tools. It performs administrative tasks such as supply chain analysis and contract management with astounding skill and speed. It has transformed battlefield tactics and targeting. And, owing to its exquisite sensitivity analysis, it can enhance defensive planning by determining how to deploy finite resources optimally against possible attack. My goal here is not to question the necessity of adopting AI as rapidly as possible but simply to inject a prudent awareness of its limitations.
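To give a flavor of the defensive-planning point, here is a minimal sketch, with invented site names, values, and effectiveness numbers, of a greedy allocation of a finite defense budget: each unit goes wherever it most reduces expected loss. Real planning models are far richer; the sketch only shows the shape of the optimization.

```python
# Greedy allocation of a finite defense budget across sites, where each
# added unit cuts a site's residual risk by a fixed fraction (diminishing
# returns). Site names, values, and effectiveness numbers are invented.
sites = {"port": 9.0, "grid": 6.0, "rail": 4.0}  # value at risk per site
budget = 10                                      # discrete defense units
effect = 0.2                                     # each unit cuts residual risk 20%

risk = {s: 1.0 for s in sites}    # residual probability of successful attack
alloc = {s: 0 for s in sites}
for _ in range(budget):
    # Marginal reduction in expected loss from one more unit at each site.
    best = max(sites, key=lambda s: sites[s] * risk[s] * effect)
    alloc[best] += 1
    risk[best] *= 1 - effect

print(alloc)  # {'port': 5, 'grid': 3, 'rail': 2}
```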

AI is an increasingly sophisticated probability generator that can be trained only on past events, peering into an indeterminate future that is not amenable to probabilistic quantification. And unlike in a casino, where the odds are bounded by the number of positions on a roulette wheel or cards in a six-deck shoe, machines will always face the chaotic interaction of social, psychological, and political factors, not to mention the personality quirks of the decision-makers who must act, or not act, on what they are told. AI is an extraordinary tool, but it is not a genie to which humans can delegate judgment.
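The casino contrast is exact arithmetic: because the sample spaces are enumerable and fixed, the odds below can be computed to the last digit, a property no geopolitical forecast enjoys. The numbers are standard casino facts, shown only for the contrast.

```python
# Bounded odds: enumerable, fixed sample spaces permit exact probabilities.
from fractions import Fraction

roulette_positions = 38                  # American wheel: 1-36, plus 0 and 00
p_single_number = Fraction(1, roulette_positions)

six_deck_shoe = 6 * 52                   # 312 cards
aces = 6 * 4                             # 24 aces
p_ace_first_draw = Fraction(aces, six_deck_shoe)

print("P(single roulette number) =", p_single_number)   # 1/38
print("P(ace on first draw)      =", p_ace_first_draw)  # 1/13
# No comparable enumeration exists for the interaction of social,
# psychological, and political variables.
```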


Joel Brenner is a senior research fellow and lecturer in MIT’s Security Studies Program. He was the head of U.S. counterintelligence policy under the first three directors of national intelligence and is a former inspector general and senior counsel of the National Security Agency.
