
Liberal Democracies Are Retreating From AI Safety

Jakub Kraus
Monday, August 11, 2025, 1:00 PM

The G7’s prosperity statement is emblematic of a broader shift in multilateral AI policy discussions.


Leaders and guests at the 44th G7 summit in La Malbaie, Canada. (Casa Rosada, https://shorturl.at/oYPlQ; CC BY 2.5 AR, https://creativecommons.org/licenses/by/2.5/ar/deed.en)


The word “safety” appears exactly zero times in the Group of Seven’s (G7’s) recent Statement on AI for Prosperity, which focuses heavily on the benefits and opportunities of artificial intelligence. Issued at the G7 Summit in Alberta, Canada, the statement begins by recognizing AI’s potential to “grow prosperity, benefit societies and address global challenges.” It then describes commitments whereby member countries promise to “promote economic prosperity,” meet AI’s energy needs, and increase access to and adoption of the technology. In an annex, an AI adoption road map outlines plans to help small and medium-sized businesses “move from uncertainty to opportunity.” The statement mostly neglects the possibility that AI could malfunction or be used harmfully.

The G7’s emphasis on AI opportunity contrasts with its more cautious tone in years past. Shortly after ChatGPT was released, G7 leaders established the Hiroshima AI Process to study generative AI, recognizing both the “opportunities and challenges” posed by the technology. Later that year, the process produced a code of conduct that outlined voluntary guidance for organizations developing advanced AI systems, such as testing for biological threats, investing in cybersecurity, and respecting intellectual property rights. The G7 continued acknowledging AI’s risks in 2024, even inviting Pope Francis to speak at the leaders’ summit in Apulia, where he called for a ban on lethal autonomous weapons.

Given this recent history, the 2025 AI prosperity statement marks a significant pivot away from earlier consideration of AI’s risks toward an almost exclusive focus on its benefits. This change is emblematic of a broader recalibration in how liberal democracies approach AI policy across many international forums, from the French AI Action Summit to the Munich Security Conference. Several forces are driving this transatlantic pivot—chief among them the Trump administration’s influence, as well as geopolitical competition, industry pressure, and AI’s growing track record of success.

The rise of AI optimism in international dialogues could help disseminate the benefits of AI globally. But it would be foolish to neglect or entirely abandon multilateral cooperation to address AI risks. In the future, more capable AI models could wreak international havoc. Criminals, autocrats, and terrorists might use AI to conduct cyberattacks, surveil citizens, or create novel pathogens. AI companion apps and AI-powered media feeds could disrupt human relationships at a societal scale. And many ambitious businesses hope to eventually build AI models and robots that can automate most jobs. If they succeed, they might spark mass unemployment and hasten the invention of new weapons of mass destruction—assuming humanity doesn’t lose control of autonomous machines altogether.

Global leaders can work together now to prepare for such outcomes. If they do nothing, they may find themselves scrambling to act under intense time pressure, without the infrastructure and shared trust necessary to facilitate an effective response.

The G7’s Evolving AI Stance

The G7’s AI policy tone over the past decade traces a full circle: Early enthusiasm gave way to growing AI safety concerns, which have now been scaled back in favor of prosperity-focused messaging.

The term “artificial intelligence” first appeared in G7 documents in a 1982 address by François Mitterrand and then effectively disappeared from the record for more than three decades. This long silence ended in 2016, when a joint declaration recognized the importance of facilitating research and development (R&D) related to emerging technologies such as AI and robotics. AI enthusiasm continued the next year in a ministerial declaration describing AI’s potential to “bring immense benefits to our economies and societies,” and G7 innovation ministers issued an AI statement in 2018 that emphasized economic growth, trust and adoption, and inclusivity in AI development and deployment. Later that year, G7 leaders issued the “Charlevoix Common Vision for the Future of Artificial Intelligence,” which highlighted AI’s potential to “help address some of our most pressing challenges.”

These early assessments generally focused on AI’s benefits, but they did not ignore risks. For example, the 2017 ministerial declaration included an annex on “human centric” AI, and the 2018 Charlevoix document included a commitment from G7 leaders to encourage industry to “addres[s] issues related to accountability, assurance, liability, security, safety, gender and other biases and potential misuse.” In 2019, leaders issued a statement that AI “may have disparate effects regarding the economy and privacy and data protection, and implications for democracy.” Despite the cancellation of an in-person summit in 2020 due to the COVID-19 pandemic, G7 nations worked with several like-minded countries to establish the Global Partnership on AI (GPAI), whose 15 founding members pledged to support the “responsible and human-centric development and use of AI” while respecting the 2019 OECD Principles on AI. G7 leaders expressed explicit support for GPAI at summits in 2021 and 2022, though AI did not feature centrally at these meetings.

The release of ChatGPT made a splash in late 2022, fueling important AI governance developments throughout Japan’s 2023 G7 presidency. The year’s signature achievement was the Hiroshima AI Process, an ongoing series of dialogues focused on both the opportunities and the risks of AI. These discussions resulted in a high-level set of principles and a detailed code of conduct for organizations developing AI. A cursory glance at the code of conduct reveals a heavy emphasis on risk mitigation, such as AI safety research investment and evaluations for risks of “self-replicating” AI models.

In 2024, the focus on risks remained, but it was also paired with attention to AI’s benefits. Early in the year, a ministerial declaration emphasized both opportunities and risks of AI. At the leaders’ summit in Apulia, Pope Francis gave an address on AI’s effects on humanity and called for a ban on lethal autonomous weapons. Later, G7 ministers issued an action plan for “human-centered adoption of safe, secure, and trustworthy AI” in the workforce, which repeatedly noted risks. Other documents focused more on opportunities, like an AI and tourism policy paper—but even that analysis was tempered by several pages describing potential risks.

When leaders issued their Statement on AI for Prosperity at the 2025 leaders’ summit, this cautionary tone was notably lacking. G7 leaders continued to emphasize AI opportunity but considerably scaled back their earlier attention to global AI risks. The document’s primary concerns about AI are its effects on power grids and “risks of disruption and exclusion from today’s technological revolution.” Mentions of other potential harms, such as deepfake fraud and AI-powered biological threats, are absent. The prosperity statement then outlines how the G7 will take action to accelerate adoption, find ways to meet AI’s energy demand, and support new AI markets.

A Broader Pivot Away From Risks

The G7’s neglect of risk, far from being an aberration, is representative of a larger shift demonstrated by the trajectory of other global AI dialogues. In the wake of the safety-minded 2023 U.K. AI Safety Summit and 2024 AI Seoul Summit, Paris shifted gears with its 2025 AI Action Summit, which focused heavily on the economic opportunities unleashed by AI. The Paris summit concluded with 12-figure AI innovation investments, refusals from the U.S. and U.K. to sign a summit statement discussing AI ethics, and critiques of “hand-wringing about safety” from U.S. Vice President JD Vance—prompting one pundit to jokingly refer to it as the “Paris AI Anti-Safety Summit.”

The international network of AI Safety Institutes (AISIs) has witnessed similar trends. In early 2023, the U.K. announced 100 million pounds in funding for its new Foundation Model Taskforce, which later became the U.K. AISI. The U.S. launched its own institute in November 2023, and, over time, countries including Japan, Canada, and South Korea declared intentions to create their own institutes. In fall 2024, the U.S. announced the formation of an international AISI network that also included Australia, France, and the European Union.

After the Paris summit, however, the U.K. AISI changed its name from “AI Safety Institute” to “AI Security Institute.” One might argue this move was largely a surface-level brand makeover, but the U.K. AISI also said it would “not focus on bias or freedom of speech”—backpedaling on its 2023 pledge to study bias and misinformation. Months later, the U.S. also renamed its AISI, opting for the “Center for AI Standards and Innovation (CAISI),” which included a new focus on “guard[ing] against burdensome and unnecessary regulation of American technologies by foreign governments.”

The Munich Security Conference offers another example of the shift away from AI safety. In 2024, leading AI companies signed the AI Elections Accord, a pledge to voluntarily curb AI’s risks to global elections by advancing detection tools for AI-generated content, assessing AI models for risks, providing transparency to the public, and more. Many companies issued progress updates detailing their work on this front, but the voluntary pledge expired at the end of 2024, and the 2025 conference featured no effort to replace it. The current president of the Council on Foreign Relations articulated the trend, writing “[AI] remains a major issue at Munich, but the tone of the discussion has shifted. [...] This year, it is on the opportunities that AI offers and the Europeans’ concern that their regulatory approach might leave them further and further behind.”

NATO policies have also recently deemphasized AI risks. In 2023, NATO’s Data and Artificial Intelligence Review Board met to begin developing an AI certification standard to help ensure that new AI projects “are in line with international law, as well as NATO’s norms and values.” In 2024, NATO’s revised AI strategy included a section on risks from AI misuse, which states that “NATO must remain a proponent of responsible use behaviours, by using its convening power to influence international norms and standards.”

But around the same time, NATO also made plans to create a “Rapid Adoption Action Plan” for emerging technologies. At the 2025 summit, NATO published a summary of this plan, which calls for implementing new technologies within 24 months of identifying a need. The summary emphasizes a need to “win the technology adoption race,” arguing that “some acquisition and procedural risks need to be an inherent part of innovation and rapid adoption.” This language suggests a greater tolerance for emerging technology risks, including AI risks.

Safety deemphasis is also visible in discussions on domestic AI legislation. In the U.S., both the Senate and the House explored potential AI legislation in 2023 and 2024. But in 2025, Congress considered banning most state AI laws without passing federal laws to take their place. It is telling that Colorado Gov. Jared Polis, who signed one of America’s major state AI bills into law in 2024, now supports a federal moratorium.

Similarly, in the first half of 2025, the European Commission withdrew its AI liability directive, and the prime minister of Sweden called for a pause in the rollout of EU AI Act rules. Meanwhile, the United Kingdom continued delaying its efforts to pass AI legislation, while finding time to publish an AI Opportunities Action Plan. In the Pacific, Japan passed a “Bill on Promotion of R&D and Utilization of AI-related Technologies,” and South Korea enacted legislation that some commentators described as “far more innovation-friendly compared to the EU AI Act.”

Despite extensive examples of a shift away from AI safety in 2025, risk-mitigating policy efforts have not disappeared entirely. In the U.S., states continue to enact new AI laws, such as 18 new California laws, the Texas Responsible AI Governance Act, and Utah’s mental health chatbots bill. Bodies like the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) remain active in developing voluntary AI standards, such as guidance on “red team” testing strategies. Bilateral dialogues persist, including a recent memorandum of understanding between Canada and the United Kingdom’s AISIs and the first AI dialogue between the U.K. and China. Additionally, several AISIs are actively collaborating on testing for AI threats. There are also ongoing government-backed efforts to build scientific consensus on AI risks and opportunities, including the UN’s push to build an Independent International Scientific Panel on AI, the recent Singapore Consensus on Global AI Safety Research Priorities, and the U.K.-backed International AI Safety Report. However, such examples are increasingly the exception rather than the norm. 

The Trump Effect

The U.S.’s change of administration—from Biden to Trump—is a major factor behind the shift in AI policy among liberal democracies. The U.S. holds significant sway in international AI discussions, not only because of its economic and military strength but also because of its leadership in AI.

The Biden administration led several international AI cooperation efforts that seem to have fizzled—or at least lost priority—under President Trump in 2025. According to Ben Buchanan, the former White House special adviser for AI, the G7’s code of conduct was “built heavily upon” voluntary commitments that top AI companies made to the Biden White House. The previous administration also introduced the first major UN General Assembly resolution on AI, which received support from 193 nations including China. Further, the U.S. was among the initial signatories of the Council of Europe’s global AI treaty in fall 2024, though it sought exemptions for private companies. It also promoted the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy during 2023 and 2024. Perhaps most significantly, President Biden discussed AI with Chinese President Xi Jinping multiple times, reaching agreement on limiting AI in nuclear decisions and bringing top officials together for AI talks in Geneva.

The Trump administration, by contrast, has expressed concern that AI regulation could hinder AI innovation. In Vice President Vance’s speech at the 2025 Paris AI Action Summit, he argued that restrictions on AI development would mean “paralyzing one of the most promising technologies we have seen in generations.” He also called for “international regulatory regimes that fosters the creation of AI technology, rather than strangles it.” More recently, the White House AI Action Plan featured a section titled “Remove Red Tape and Onerous Regulation,” which stated that “AI is far too important to smother in bureaucracy at this early stage.” Though the plan addresses some AI risks, it also states plainly that “too many” international AI governance efforts have “advocated for burdensome regulations.”

One of Vance’s four central points was the need to avoid “ideological bias” in AI—a phrase that has appeared repeatedly in the Trump administration’s AI statements, including a recent executive order targeting “woke AI.” This language reflects a belief, common in some Republican circles, that “AI safety” is often a euphemism for left-wing social policy. Senate Commerce Committee Chairman Ted Cruz (R-Texas), for instance, accused an AI policy nonprofit in April 2025 of promoting “so-called ‘safety’ standards that aligned with Biden-era censorship around race and gender.”

It’s true that “AI safety” sometimes refers to addressing AI’s impact on polarizing issues like misinformation; diversity, equity, and inclusion; and climate change—three concepts that the AI Action Plan recommends eliminating from the Commerce Department’s popular AI Risk Management Framework. These three topics are visible in the G7 code of conduct, and the AI Action Plan unsurprisingly criticizes “vague ‘codes of conduct’ that promote cultural agendas.” Given this aversion to certain AI safety topics, the Trump administration may be inclined to treat all discussions of AI risk with suspicion in global policy settings—even when only small parts touch on politically sensitive issues.

Another potential driver of the Trump administration’s AI stance is resistance to foreign tech regulations, especially EU regulations. Vance’s speech specifically criticized the General Data Protection Regulation (GDPR), which has imposed penalties as high as $1.3 billion for rule violations, and the newer Digital Services Act (DSA), which may issue fines against X. Days after his inauguration, President Trump called European fines against Apple, Google, and Meta “a form of taxation” while concluding his virtual address to the World Economic Forum. More recently, the Trump administration has lobbied against efforts to implement Europe’s AI Act.

The Trump administration’s pushback isn’t limited to European regulations; it has increasingly targeted digital regulations worldwide that it sees as threats to U.S. technology interests. For example, Trump issued an executive order in February describing plans to use tariffs and other countermeasures against foreign actions—such as digital service taxes—that harm or discriminate against U.S. companies. The next month, shortly before Trump’s “Liberation Day” tariff announcement, U.S. Trade Representative Jamieson Greer issued the annual National Trade Estimate Report on Foreign Trade Barriers (NTE), which drew praise from tech industry advocates and criticism from consumer advocates for expanding its focus on digital trade barriers relative to the 2024 NTE. Ambassador Greer elaborated on the administration’s stance during a U.S. House hearing, arguing that “in no case can we allow discrimination to undermine our competitive advantage” in digital tech. Given this context, the administration appears likely to resist international AI dialogues with regulatory undertones, viewing them as another potential foreign constraint on U.S. tech companies.

Part of the fuel behind this focus on competitive advantage is the Trump administration’s ambition to compete with China. U.S.-China tech competition has been a prominent narrative in Washington for years, and its implications for AI have long sparked discussions of an AI arms race or even an AI Cold War—but such sentiments seem to have intensified recently. Google Trends data shows that U.S. searches for “AI race” rose significantly after the release of ChatGPT in late 2022. Last fall, the U.S.-China Economic and Security Review Commission recommended that Congress establish a Manhattan Project-like program for “racing to and acquiring” AI capabilities that “usurp the sharpest human minds at every task.” In January, DeepSeek’s R1 model sparked record losses in Nvidia stock, leading one influential Trump donor to declare that “DeepSeek-R1 is AI’s Sputnik moment.” At July’s “Winning the AI Race” event in Washington, Trump claimed the U.S. “will be adding at least as much electric capacity as China” and then signed an executive order that expedites environmental permitting for AI data centers. The Biden administration expanded chip export controls to curb Chinese AI, but the Trump administration has arguably inherited a geopolitical atmosphere with even stronger incentives to innovate at all costs. 

In addition to these considerations, President Trump has shown a general distrust of conventional multilateralism and a willingness to buck foreign policy traditions. In his first term, Trump withdrew or moved to withdraw from the Paris climate agreement, the Iran nuclear deal, the Trans-Pacific Partnership, the World Health Organization, the UN Human Rights Council, the Open Skies Treaty, and more. G7 summits were unsurprisingly tense. Subsequently, during the first six months of Trump’s second term, his administration disrupted U.S. foreign aid, twice paused military support for Ukraine, issued global tariffs, discussed annexing Canada and Greenland, suspended Pentagon participation in the Halifax International Security Forum, and withdrew most officials from the Aspen Security Forum—accusing it of promoting “the evil of globalism.” Thus, foreign leaders seeking to soothe diplomatic tensions may be more hesitant to push for AI policies that diverge from the Trump administration’s priorities.

Deeper Reasons for the Shift

Though the Trump administration may play a significant role, it cannot singlehandedly explain the growing dismissal of AI safety. Consider France, which began planning its AI Action Summit and questioning Europe’s AI Liability Directive well before Trump was elected. In late 2023, France joined forces with Germany and Italy to oppose general-purpose AI regulations in the EU AI Act, issuing a joint paper that argued for voluntary self-regulation so that European players could “emerge and carry our voice and values in the global race of AI.” Just 18 months prior, it was France that had proposed expanding the AI Act to regulate general-purpose models. Trump’s election in 2024 cannot explain France’s volte-face, which predates the president’s second term.

The AI policy shift during this period coincides neatly with the release of ChatGPT, which OpenAI made public shortly after Thanksgiving 2022. Within four months, Pew polling found that 58 percent of U.S. adults had heard about the product; Nvidia’s CEO called it “the AI heard around the world”; OpenAI released an even more powerful model; and a prominent letter called for a pause on training systems more capable than that. By late May, OpenAI’s CEO was testifying before Congress and warning of human extinction. Loosely speaking, the world “woke up” to AI in early 2023, and some people felt the rapid pace of progress was frightening.

Meanwhile, a separate sentiment was brewing in the AI industry: fear of missing out. In the months following ChatGPT, Google management declared a “code red,” Microsoft invested $10 billion in OpenAI, Meta and Anthropic released significant AI models, and Elon Musk began planning his own AI company. Across the Atlantic, the French AI startup Mistral raised 105 million euros in June 2023.

When viewed in connection to Mistral, France’s turn away from general-purpose AI regulations is easier to understand. Since his 2017 election, President Emmanuel Macron has consistently sought to support startups and tech innovation. Further, one of Mistral’s co-founders, Cédric O, served as France’s secretary of state for digital affairs, and he was a founding member of Macron’s political party. O advocated for lighter-touch EU rules in 2023, warning that if compliance costs were too high, the AI Act “could kill Mistral.” Various forms of industry lobbying, including from much wealthier tech companies than Mistral, have likely played a role in shifting the global AI policy discourse away from risks.

Lobbying efforts have benefited from growing evidence of AI’s profitability and tangible benefits. OpenAI claims to have surpassed $10 billion in annual recurring revenue, and Anthropic is reportedly generating over $333 million in revenue per month. Research on AI in the workforce suggests that the technology is growing in popularity and can produce noticeable productivity gains. Perhaps for these reasons, Ipsos polling data suggests that optimism about the net benefits of individual AI products rose from 2022 to 2024 in several countries, such as France (+10 percentage points), Germany (+10 percentage points), and the United States (+4 percentage points). (That said, other polling suggests that some concerns about AI remain high in the U.S.)

Besides boosting economies, governments are noticing AI’s potential to strengthen militaries. The Russia-Ukraine War has seen useful AI applications in drone warfare. Most notably, Ukraine claims it inflicted $7 billion of damage on Russian military aircraft in an unprecedented Trojan horse-style attack in June, using an AI-assisted drone fleet worth less than $250,000. And AI shows promise in other domains besides airspace, such as land, naval, space, and cyber operations. Anthropic, Google, OpenAI, and xAI recently secured U.S. defense contracts that could bring in up to $200 million for each company, and other military AI projects are underway in countries like Greece, Denmark, and China.

The prospect of AI-enabled economic and military advantage has prompted stronger geopolitical competition and pushes for “sovereign AI.” Thus far, 2025 has seen efforts in France, the EU, China, and the U.S. to mobilize hundreds of billions of dollars for AI-related projects in the coming years. However, the current best general-purpose AI models mostly come from companies based in the U.S. and China. And this may remain true in the future, as the U.S. currently accounts for roughly 75 percent of global AI supercomputer performance, while China trails in second with 15 percent, according to Epoch AI. Therefore, some liberal democracies may be reluctant to prioritize AI safety, fearing that precautionary policies would leave them even further behind the U.S. and China.

Implications for the World’s AI Future

There are certainly benefits to deemphasizing safety. Liberal democracies’ pivot toward AI opportunity—unhindered by concerns over safety—could help facilitate the technology’s rapid development, transforming areas like health care, automotive safety, scientific discovery, accessibility, and agriculture. Furthermore, avoiding especially burdensome regulations could help democracies stay competitive with China—though one might counter that establishing basic safeguards could lower the risk of Chernobyl-style disasters that undermine public trust.

Another potential benefit of slowing regulatory dialogues is that future regulation can be tailored to the technology in its more advanced state. It remains uncertain how AI will evolve and whether all of today’s policy proposals will hold up as systems progress. For instance, mandating a single round of testing for AI models that surpass a compute threshold presumes a clear-cut separation between training and deployment, which may break down if future AI systems are able to continue improving during real-world operation. Taking a cautious approach to international governance reduces the risk of enacting premature regulations and allows time to design frameworks that are more robust and future-proof.

If, however, future AI threats to international security materialize, governments may find themselves lacking the institutions, infrastructure, consensus, and time needed for an effective response. Unlike nuclear technology with the International Atomic Energy Agency (IAEA) or particle physics with the European Organization for Nuclear Research (CERN), AI lacks established international institutions. Further, the world currently lacks reliable verification technology and protocols to ensure that major AI powers comply with potential commitments. Building consensus could become more difficult as AI grows more powerful, and if key nations view certain mitigations as unnecessary and refuse to join agreements, then those accords might lose much of their value. Perhaps most challenging, crafting effective multilateral policies among nations with different cultures and interests could take years of effort, and time may be scarce if AI capabilities are advancing rapidly. New global AI risks are already emerging: Anthropic recently elevated its safeguards due to concerns about AI-enabled biological threats, and malicious actors have attempted to deceive U.S. officials with AI-generated media—which is rapidly growing more realistic.

These developments suggest the window for preparation is narrowing. As the world considers how to balance caution with optimism, the shift away from safety may represent AI policy’s most consequential gamble to date.


Jakub Kraus is a Tarbell Fellow writing about artificial intelligence. He previously worked at the Center for AI Policy, where he wrote the AI Policy Weekly newsletter and hosted a podcast featuring discussions with experts on AI advancements, impacts, and governance.