
The Same Old Fantasies Behind AI and New Technology

Henry Farrell
Friday, June 13, 2025, 8:00 AM
A review of Adam Becker, “More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity” (Basic Books, 2025).
Silicon Valley from above. (Patrick Nouhailler, https://www.flickr.com/photos/patrick_nouhailler/8666949563, CC BY-SA 2.0, https://creativecommons.org/licenses/by-sa/2.0/)

Published by The Lawfare Institute in Cooperation With Brookings

Adam Becker’s “More Everything Forever” begins by describing the ideas of Eliezer Yudkowsky, an AI guru who Sam Altman thinks deserves a Nobel Prize. Yudkowsky’s ambitions for humanity include “[p]erfect health, immortality,” and a future in which “[i]f you imagine something that’s worse than mansions with robotic servants for everyone, you are not being ambitious enough.” According to Yudkowsky and his peers, a “glorious transhumanist future” awaits us if we get AI right, although we face extinction if we get it wrong.

“AI” and “transhumanist” are new terms for rather older ambitions. As the seedy occultist Dr. Trelawney remarks in Anthony Powell’s 1962 novel, “The Kindly Ones,” “[t]o be forever rich, forever young, never to die … Such was in every age the dream of the alchemist.” Renaissance alchemists won the support of monarchs like Rudolf II, the Holy Roman Emperor who squandered his realm’s money on a futile quest to discover the Philosopher’s Stone. Now, as Becker explains, AGI, or artificial general intelligence, has become the means through which philosophers might transubstantiate our mundane reality into a realm in which the apparently impossible becomes possible: living forever, raising the dead, and remaking the universe in the shape of humanity. 

These ideas would be a curiosity if they weren’t reshaping the world, and policymakers’ understanding of national security. Our epoch is quite as strange as Rudolf II’s Prague. Like a John Crowley novel, it has its own deathless golems and wizards who hope to speak to divine beings through a medium. In Ezra Klein’s description, AI’s coders see themselves as casting spells of summoning, even if they are not sure what lurks on the other side of the portal.

Just as in centuries past, rulers listen to them. The Biden administration bet Americans’ national security on the proposition that AGI was right around the corner, while the Trump administration and its allies in the Gulf seem to believe that AI will help make a world where they will be in charge.

Becker’s excellent and lively book is not about AI as a working technology. It has little to say about the combinations of machine learning and “neural networks” (statistical processing engines that loosely resemble systems of neurons) that, for example, are used to simulate protein folding and complex weather systems. Instead, it is about the idea of AI and other closely related ideas. If it sometimes feels as though we live in a dark self-ramifying fairy tale, it is because the often mundane realities of AI have become interwoven with a set of fantastical notions that long predate the working technologies we have today.

There are many books that will help you understand how AI technology works in practice but few that even begin to describe how it works as myth. If you want to understand why many AI leaders believe that their technology will heal the wounds of mortality, grant us nearly limitless abundance, and allow us to spread into the galaxy, this is the book you ought to read.

Becker treats the aspiring magi of AI seriously, while arguing that they are terribly wrong. Long before models like ChatGPT existed, gurus like Yudkowsky, the recently deceased computer scientist and science fiction writer Vernor Vinge, and the futurist Ray Kurzweil created a loosely shared vision of what AI might do. They hoped for a future where humanity might use AI to turn stars and even galaxies into resources to be used for human purposes, or feared one in which humanity might become the resources to be burned up and discarded by feral AI that transcended its creators and developed its own goals. The result, Becker says, is a confidence that technology can cure all ills, and a singular vision of an “immortal future in space,” with limitless energy, limitless time, and unbounded resources. The truth, he says, is that it is through acknowledgment of our physical limitations that we can understand the actual variety of political possibilities that we have, as we are forced to confront the difficult problems that the dreamers might prefer to wish away.

Their ideas were spun out of a mixture of analytic philosophy and science fiction, creating something that is as much a loose community as a way of thought. As Becker says, this ideology of the future is “sprawling and ill defined,” and its animating culture centers on the Bay Area. “Longtermists,” “rationalists,” “advocates of the Singularity”—all argue together ceaselessly on the internet and at physical gatherings, concocting heady brews of commingled ideas that have shaped our understanding of AI.

Longtermism is a particular flavor of “effective altruism,” which is itself an offshoot of utilitarian analytic philosophy, the idea that we should aim to promote the greatest good of the greatest number of people. Effective altruists began by arguing that we should favor the interests of faraway people as much as those of people close to home, and by trying to measure the effectiveness of philanthropic interventions. Dull-seeming measures such as providing mosquito netting might save more lives than flashier and more expensive solutions to superficially bigger problems. Over time, however, some prominent philosophers of effective altruism such as Nick Bostrom, Toby Ord, and Will MacAskill began to shift toward grander ambitions. What if we started valuing the lives of future people just as much as those of people living right now? And what if there were possible futures in which humanity might spread across the stars, or upload copied humans into virtual environments, perhaps enabling quadrillions of descendants? Shouldn’t we be doing everything possible to ensure that those futures happen, even if it meant neglecting current problems such as climate change? According to Ord and MacAskill, a hotter Earth might be bad, but it would be a minor speed bump in future human history compared to the promises and challenges of AI and similar technologies.

These ideas meshed well with rationalism, an online movement that flourished on websites such as LessWrong, Overcoming Bias, and Slate Star Codex. Rationalists were devoted to improving individual human reasoning through the use of Bayes’ Theorem (a means of adjusting the probabilities of different hypotheses given evidence), game theory, and other mathematical tools. Prominent rationalists like Yudkowsky were obsessed with the possibility that AI might be able to reason better than human beings. Could rationalism come up with techniques to constrain future AIs, to make sure that their self-interest aligned with the interests of human beings? Initially, Yudkowsky and others were optimistic, but they grew increasingly pessimistic over time, fearing that AIs would prove better at manipulating humans than the reverse. One notorious rationalist thought experiment, Roko’s Basilisk, laid out a convoluted logic under which people who were aware of the possibility of future AI, and did not do everything they could to bring it about, would be damned to eternal torment once AI took over.

All found inspiration in Vernor Vinge’s idea that humanity was on the verge of a “Singularity,” a point at which its fate would change utterly. Once humans developed AI systems that were smarter than human beings (in other words, AGI), those systems could work to make themselves even smarter, and that still-smarter form of AI would make itself smarter again, in a feedback loop that could transform the human condition over the course of an afternoon. A glorious future awaited us so long as AI remained subservient to human needs. If it did not, humanity might accidentally be wiped out as machine intelligence apotheosized. Kurzweil popularized such ideas in his 2005 book, “The Singularity Is Near,” which predicted that the Singularity would take place within a few decades. These various ideas merged, as Becker describes it, into the “ideology of technological salvation,” the notion that we should just “align the AI, avert the apocalypse, and technology will handle the rest.”

This heady concoction of ideas helped inspire the development of the so-called frontier models that have reshaped thinking about AI over the past four years. OpenAI, which is headed up by Sam Altman, was founded by rationalists and longtermists who believed that the Singularity was nigh, and persuaded rich people in Silicon Valley that they needed to do something about it. Commercial interests couldn’t be trusted to get it right: Hence OpenAI, which created the GPT series of models, was structured as a nonprofit under the control of a board of people focused on AI alignment. 

This arrangement was supposedly dedicated to ensuring that AI would be safe and aligned with the broad interests of the human species, but the arguments began almost immediately. Elon Musk, one of OpenAI’s most crucial early backers, went up against Altman and lost. Then, people in this small community began to figure out that large language models, an approach to AI that shrinks the vast corpuses of human-generated text available on the internet and elsewhere into relatively compact sets of statistical weights, could respond to prompts in ways that seemed to resemble ordinary human conversation. Improving these models involved “scaling”—deploying expensive specialized semiconductors on enormous training runs to process ever vaster amounts of data. That might mean that AGI and the Singularity were close at hand—but getting there would require enormous amounts of money from commercial backers. The resulting tensions led to further drama, in which some key OpenAI people left to found a rival, Anthropic, which was supposed to take AI risk more seriously, and in which skeptics on OpenAI’s board fired Sam Altman for a reported lack of candor—but found themselves replaced when Altman returned with Microsoft’s backing.

Transhumanist ambition and alarm about out-of-control AI allow labs like OpenAI and Anthropic to claim they are shaping a better future for us all, and warding off a worse one. Dario Amodei, the CEO of Anthropic, warns of the havoc that AI could wreak if it is not controlled. He also hopes that AI will soon become a kind of “pure intelligence” that is “smarter than a Nobel Prize winner across most relevant fields,” a whole “country of geniuses in a data center,” dedicated to discovery. Just as the Philosopher’s Stone could transmute substances, heal diseases, and create an elixir of longevity, Amodei predicts that AI will soon transform material science, cure most forms of cancer, and double the human lifespan.

As Becker emphasizes, these extraordinary visions of the future of humanity are not inherently stupid. Strange possibilities may emerge from big changes that happen at scale. It is possible that without these visions, the initial breakthroughs would never have happened. The sociologist Max Weber inquired in the early 20th century into how people ever acquired the capital that allowed the modern economy to get going. He famously concluded that Calvinists’ fear of eternal damnation was an initial motor driving the work ethic essential to the accumulation of investment capital, a crucial foundation for the development of the modern economy in the West. Perhaps the irrationalism that drives the rationalizations of rationalists—the alchemical quest for eternal life and breeding-gold—similarly spurred the creation of models that would otherwise have seemed speculative and unaffordable. Perhaps not.

Either way, debates about AI have largely been captured by a monocultural community of rationalists, longtermists, and Singularity speculators who developed their ideas and ambitions long before they built the models. The people in this community are very far from stupid, and many of them, likely the great majority, are not unusually hypocritical either. It may be difficult to get a founder to understand criticism, when their next funding round depends on them not understanding it, but ordinary engineers and thinkers and writers face fewer such temptations. 

Still, the end result is that our current debates on AI are based on speculative depictions of speculative futures, which were largely developed by Vinge in the 1990s and Kurzweil and Yudkowsky in the 2000s on the basis of still earlier ideas (Becker describes the stories of Isaac Asimov and other writers from science fiction’s “Golden Age” between the 1930s and 1960s). These debates revolve around enthusiastic claims about how the capacities of actually existing AI (a set of powerful statistical and predictive techniques) might evolve to match the notional AI described in thought experiments from analytic philosophy, and ideas borrowed from science fiction and fantasy (Yudkowsky’s magnum opus is the 660,000-word fanfiction “Harry Potter and the Methods of Rationality”). 

Such ideas have spread far beyond their original community. Shazeda Ahmed and colleagues find that effective altruists have built up the field of AI safety around their preoccupation with “aligning” AI, so that it does not turn against its creators. The “labs,” and the people in them, can shape debate, not just because they have ordinary influence, but because they have a near monopoly on the data about what cutting-edge AI can or cannot do. A recent paper by Ilan Strauss and others suggests that research by the labs dominates academic citations: Google DeepMind alone has more citations to its papers than the four most influential universities combined.

That helps explain how the labs’ ideas spilled over into policy. Becker has much more to say about Silicon Valley than about Washington, D.C. He devotes a page or so to discussing how the ideology of technological salvation has infused well-funded D.C. think tanks, but that is not his main focus. Even so, anyone who is engaged in national security conversations will recognize the remarkable influence of the ideas that Becker describes, and how they have cross-fertilized with fear of China to bring the debate to a new stage.

Shortly before he stepped down as national security adviser, Jake Sullivan called Axios with a “catastrophic” warning of what it would mean if China, rather than America, was able to control the “potentially god-like powers” of AI. From off-the-record conversations I have had, it is clear that prominent people in the Biden administration anticipated that something loosely resembling Vinge’s Singularity—the moment when AI would be able to improve itself in a continuing feedback loop—was close at hand. This was the main reason why America denied China access to the most advanced semiconductors used to train AI. It wanted to ensure that the transition to AGI took place on America’s terms, foreclosing an AGI revolution with Chinese characteristics. If America could just stay ahead for a couple of years, American AI would reach takeoff, providing the U.S. with an enduring strategic advantage. One of the Biden administration’s last policy initiatives was a doomed effort to use its control of global semiconductor production to divide the world into three zones, with different levels of access to chips and cutting-edge AI models. These policy ideas have fed back into the labs, as founders like Amodei co-author op-eds warning of the dangers of Chinese AI, while Altman warns that the U.S. is “barely ahead” of China.

The Biden administration’s policies rested on risky bets. AI may not be the universal solvent for vexing and complex technological problems promised by Altman, Amodei, and their peers. Some (including most members of the Association for the Advancement of Artificial Intelligence, as well as my colleagues and me) believe that current approaches will not lead to anything like self-improving AGI anytime soon. As DeepSeek’s success in building AI models suggests, semiconductors may not provide the chokehold on AI that American policymakers believed they would.

In any event, the Trump administration has seemingly opted for a different approach, replacing a grand plan for a global AI order with a loose approach based on the ever-shifting convergence between notions of American greatness and the particular interests of well-connected companies. This currently involves leveraging control of technology to maintain American domination and cutting deals with Gulf states, while trying to cut China out. When Trump toured the Gulf states with an entourage of tech CEOs, he offered access to top-end semiconductors in exchange for promises of massive investment, graciously accepting a jet plane and a large-scale money transfer routed, with fees, through his family stablecoin.

This emerging blend of imperium, magical technology, and self-dealing is likewise reminiscent of the Renaissance. The English wizard John Dee, who claimed to have turned mercury into gold in the presence of Rudolf II, is also credited with coining the term “British Empire.” As a member of the court of Queen Elizabeth I, he combined alchemical experiments and conversations with angels (conducted through a medium) with more mundane plans to extend British power across the Atlantic and enrich himself.

The alchemists’ quest for immortality and universal transformation is again entangled with the struggle among empires. The dreams of perfect health, the conquest of death, and mansions with robot servants that Becker describes inspire visions of a radically transformed far future and a near present in which America can use advanced technologies to bolster its power. Such visions stem from an ideology of salvation that has indeed fueled important technological advances but may be quite wrong about where those advances will bring us. 

We are being ushered into a world where AI models offer new and useful means to summarize the information of the world and draw nonobvious connections across it, but can also weave new syncretic religious beliefs together from the myriad human sources that they summarize. In the early 20th century, Weber argued that rationalized Calvinism had led to the defeat of the old gods and the inexorable disenchantment of a world dominated by bureaucracy and organized capitalism. In the early 21st, just the opposite is happening as AI draws open the gate for the return of the fantastical. Yudkowsky’s eschatology is just one of the prophetic visions attendant on AI. The practical changes that AI will bring to bureaucracy and markets will go hand in hand with religious ecstasy and fervor, as some people come to believe that they can speak to God through the machine, while others use it to elaborate their own extended fantasies about politics. As Becker’s book shows, technology is not just an abstract force changing the world, but a mirror in which human beings see their dreams and ambitions made manifest. We’re starting to discover what that looks like at scale.


Henry Farrell is the Stavros Niarchos Foundation Agora Institute Professor of International Affairs at the Johns Hopkins School of Advanced International Studies. He is the author of “Underground Empire: How America Weaponized the World Economy” (with Abraham Newman, 2023), “Of Privacy and Power: The Transatlantic Fight Over Freedom and Security” (with Abraham Newman, 2019), and “The Political Economy of Trust: Interests, Institutions and Inter-Firm Cooperation” (2009).
