
China’s AI Governance Ambitions and Their Implications for Free Expression

Jordi Calvet-Bademunt, Jacob Mchangama
Monday, December 22, 2025, 1:00 PM
China is exporting its AI governance model; democracies must act now or risk letting others define the future of speech.
The Former National Congress of the Communist Party of China (Dong Fang for VOA China, https://tinyurl.com/mteyzmnw. Public Domain.)

Published by The Lawfare Institute in Cooperation With Brookings

“China is going to win the AI race,” Nvidia CEO Jensen Huang warned recently. In a follow-up statement released shortly after making that bold proclamation, Huang softened his tone, noting that China was only “nanoseconds behind America in AI.”

Whether apocalyptic or cautious, Huang’s statements echoed across Silicon Valley and Washington, reinforcing a clear imperative for artificial intelligence (AI) investors and policymakers: The United States must outpace China in the AI race by attracting talent, securing computing power, and shaping the global AI stack.

While headlines fixate on who will build the most powerful models, China has already surged ahead in a different and consequential contest: the race to shape global AI governance. On this front, Beijing’s agenda raises significant concerns about freedom of expression around the world. This push echoes China’s years-long effort to influence international technical standards and promote authoritarian internet governance objectives.

On July 26, China released its Global AI Governance Action Plan, an ambitious road map that—in tandem with its other proposals such as the World Artificial Intelligence Cooperation Organization—seeks to position Beijing as the architect of international AI rules. On the surface, the plan’s language sounds well-intentioned. According to Beijing, AI should be a “public good for the international community,” governed in the name of “safety” and shared benefit.

However, democracies have many reasons to be wary of China’s efforts to shape global AI governance. China already maintains a system of strict online censorship through its Great Firewall, and it is replicating those efforts in the AI domain by constructing a regime of anticipatory censorship that subordinates information technologies to state ideology. One only needs to use DeepSeek, a leading Chinese model, to experience firsthand its unwillingness to engage with topics deemed sensitive by the state.

In October, in partnership with local experts, we co-authored a comprehensive six-country analysis that demonstrates just how stark the divide between open and censorious AI governance has become. China ranks last among major AI powers in terms of policies protecting freedom of expression in AI—far behind the United States, the European Union, Brazil, South Korea, and India. Our findings reflect what any user of Chinese generative AI models already knows: The Chinese Communist Party is shaping AI in the country to reflect its political priorities and societal constraints. China’s ambitions for global AI governance should be understood and evaluated against that backdrop.

Censorship by Design

Instead of a single comprehensive AI law, like the AI Act adopted in the European Union, China has constructed a dense web of administrative rules and platform-level obligations that constitute a de facto governance regime. Notably, China’s primary regulation governing generative AI—entitled Interim Measures for the Management of Generative AI Services—requires providers to uphold “core socialist values.” This includes restricting content that may subvert state power, incite secession, or disrupt economic and social order.

The Interim Measures for the Management of Generative AI Services also mandate that training data adhere to strict ideological standards, officially promoting “truth,” “accuracy,” and “objectivity,” while simultaneously requiring that training data not result in models that challenge the existing order, harm China’s image, generate “harmful” information, or contravene social mores, ethics, or morality. As legal expert Ge Chen, who authored the China chapter of our report, notes, this effectively means that data the Communist Party deems politically sensitive or ideologically nonconforming must be excluded from training models. These requirements are reinforced by vague legal concepts in Chinese law, such as “social order” and “social morality,” which the administration and courts interpret broadly, enabling expansive discretionary enforcement.

These rules—along with the misuse of copyright provisions, which Chinese authorities deploy not only to police rights violations but also to suppress politically inconvenient speech embedded in user-generated media—result in a system of censorship by design. This system is reinforced by sweeping national laws, including the National Security Law and the Cybersecurity Law. As Chen notes, these laws broadly criminalize expression that the Chinese government deems could disrupt the Communist Party’s monopoly on power, subvert or incite the subversion of state authority, overthrow the socialist system, incite secession, or undermine national unity.

When democracies consider China’s proposals for global AI governance, including its Global AI Governance Action Plan, they should carefully assess China’s efforts to export these anti-democratic values. Rather than accepting China’s proposals, democracies should put forward an alternative vision for AI governance centered on the protection of fundamental rights and freedom of expression.

Yet democracies have so far failed to articulate a compelling vision of what the rights to free speech and access to information should mean in the AI era, and how to shape global AI governance accordingly. Too often, they either sideline free expression in their regulatory frameworks or become mired in domestic culture wars.

Democracies Are Faltering

Unfortunately, China’s push comes at a moment when the commitment to freedom of expression in leading democracies is uneven and increasingly uncertain. Our six-country comparative analysis and ranking identified and assessed legislation and policies applicable to AI, ranging from AI frameworks to rules on copyright, defamation, disinformation, hate speech, and explicit content.

Thanks to a First Amendment that provides the world’s strongest safeguards for free speech, the United States ranks in our research as the most speech-protective jurisdiction for AI. But this strength is undermined by two destabilizing trends.

First, in the absence of a coherent federal framework, states are rushing to adopt their own rules, creating a patchwork of AI governance from coast to coast, sometimes at the expense of free speech. Notably, several states, including California, Minnesota, and Texas, have adopted laws restricting political deepfakes, raising serious questions regarding core constitutionally protected speech. A federal judge has already struck down California’s law on First Amendment grounds, finding that “[j]ust as the government may not dictate the canon of comedy, California cannot pre-emptively sterilize political content.” Experts believe a similar outcome is likely with respect to Minnesota’s law, indicating that these provisions—and others like them—are unlikely to survive judicial scrutiny.

Other laws, such as the Colorado Artificial Intelligence Act, also risk chilling legitimate speech. The law—which has yet to be implemented—exempts from its duty-of-care requirements chatbots that communicate with “consumers in natural language for the purpose of providing users with information” and that are “subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.” This definition covers services such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. However, the law does not further define the accepted use policy or specify what “harmful” means. Because of this ambiguity, companies could interpret the term broadly, leading to excessive censorship of protected speech.

What’s more, the politicized war over “woke AI” has transformed legitimate concerns about bias into a culture-war proxy battle, with real implications for free expression. One of the objectives of the White House’s AI Action Plan is ensuring that AI protects free speech. Perhaps reflecting the shifting political landscape in the United States—from the Biden administration’s safety-focused approach to the Trump administration’s deregulatory stance—two industry leaders, OpenAI and Anthropic, have recently published work on political bias in their products. The White House’s actions, however, should be understood in the context of an administration that has repeatedly claimed to promote free speech while taking actions that restrict it.

This tension is best illustrated by the White House’s executive order entitled “Preventing Woke AI in the Federal Government,” which requires AI models purchased by the government to adhere to “truth-seeking” and “ideological neutrality” principles. According to the order, truth-seeking requires models to “be truthful in responding to user prompts seeking factual information or analysis” and “neutrality” demands prioritizing “historical accuracy, scientific inquiry, and objectivity” while acknowledging “uncertainty where reliable information is incomplete or contradictory.”

Experts have pointed out that the order has a split personality: Its opening section deploys sharp culture-war rhetoric attacking “woke” ideologies, using language clearly aimed at a political audience. For example, the order makes several references to diversity, equity, and inclusion (DEI), calling it “[o]ne of the most pervasive and destructive … ideologies,” one that “displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI.” Yet the order’s operative sections, which govern federal AI procurement, are technocratic and largely reasonable: They apply only to the AI models the government procures, not to AI companies’ entire operations, and set a low compliance bar through the disclosure of relevant information, such as specifications and evaluations. There is thus a clear tension between the order’s ideological framing and its conventional policy substance.

There are reasons for concern wherever formal or informal pressure is exerted on developers to align AI systems with the government’s preferred viewpoints. This is more than a hypothetical concern, given recent signals in the social media sector. For instance, in January, Meta made significant changes to its content moderation policies, including ending its third-party fact-checking program in the United States and lifting restrictions on some topics that are part of mainstream discourse. When a reporter asked whether these changes were made in response to President Trump’s threats against Meta CEO Mark Zuckerberg, Trump responded: “Probably.”

The most recent White House executive order on AI, titled “Ensuring a National Policy Framework for Artificial Intelligence,” reflects both destabilizing trends outlined above. On the one hand, it seeks to address the growing patchwork of state AI laws; on the other, it deploys language that appears shaped by the broader culture wars. The order states that it is U.S. policy to strengthen global AI dominance through “a minimally burdensome national policy framework for AI.” It introduces several measures aimed at addressing state laws that conflict with this policy and at ensuring, among other objectives, that “censorship is prevented.” These measures include creating an AI litigation task force to challenge state AI laws deemed inconsistent with federal policy, as well as restricting federal funding for states whose AI legislation is considered excessively onerous.

The order also directs the secretary of Commerce to publish an evaluation of existing state AI laws, identifying those that may compel AI developers or deployers to disclose or report information in ways that could run contrary to the First Amendment. At the same time—reflecting the ongoing culture wars—the order requires an assessment of laws that would require AI models to alter their “truthful outputs.” Similarly, it instructs the chair of the Federal Trade Commission to issue a policy statement explaining the circumstances under which state laws mandating alterations to the “truthful outputs” of AI models are preempted by the Federal Trade Commission Act’s prohibition on deceptive acts or practices affecting commerce.

Across the Atlantic, Europe has also failed to provide a clear vision for staunchly protecting free speech in AI. The European Union is increasingly willing to trade freedom of expression for safety amid growing concern about disinformation and foreign information manipulation, as illustrated by the recent adoption of the European Democracy Shield.

Rather than a patchwork of legal frameworks, the European Union has adopted the AI Act, which establishes rules that will affect AI governance across member states. Unfortunately, these rules raise their own free speech concerns. The EU’s new regulatory architecture—which includes both the AI Act and the Digital Services Act—establishes vague systemic-risk obligations that require companies to mitigate “negative effects on society as a whole.” Under this framework, companies must address societal harms, “radicalizing” content, and “hateful” content—terms so open-ended that they risk incentivizing preemptive restrictions of lawful political speech. Recent enforcement signals, including national regulators criticizing chatbots for producing “biased” voting advice and content “offensive” to government officials, show how these provisions can quickly drift toward viewpoint policing.

For now, younger democracies are also unlikely to provide a robust alternative to the Chinese model. Brazil, for example, is at an inflection point. On paper, the country’s framework is strongly protective of expressive freedom, with the right explicitly enshrined in the constitution and a strong performance in major international indices measuring free speech. However, the political climate has shifted toward a more interventionist approach. In particular, courts and electoral authorities have increasingly sought to restrict digital content under broad definitions of harm and misinformation, even temporarily blocking access to the platform X in 2024 for refusing to ban several accounts deemed by the government to be spreading misinformation about the 2022 Brazilian presidential election. In 2025, the Supreme Court ruled partially unconstitutional a provision—similar to Section 230 in the United States—that shielded internet platforms from liability for third-party content absent a judicial takedown order. According to expert Carlos Affonso Souza, who drafted the Brazil chapter of our report, this case set the stage for Brazil’s Supreme Court to adopt a more interventionist posture in light of growing concerns about online harms.

Similarly, an AI bill under debate includes vague requirements for high-risk systems, similar to those in the EU, that would give regulators wide control and may lead AI companies to over-comply. As our report notes, such provisions could chill innovation and narrow the spectrum of available speech, especially in contentious political contexts. They could also open the door to stricter liability rules in defamation cases and to deepfake requirements that, while aimed at fostering transparency, may not adequately distinguish between malicious deepfakes and legitimate uses of AI such as parody, art, or activism.

Democracies Urgently Need a Free-Expression Model

Collectively, these trends reflect a dangerous pattern: Democracies are drifting toward precautionary, speech-restrictive approaches at the very moment China is offering its model as the global template.

There is no doubt that the AI era presents new challenges that governments, industry, and civil society must address. But researchers and policymakers must consider how to confront these challenges while safeguarding the fundamental right to free expression, and how to respond to China’s efforts to influence the global AI governance framework.

We offer a different proposal. To protect and promote democratic freedoms in the AI era, governments should not be empowered to dictate which viewpoints AI systems may express. Instead, democracies must articulate a governance framework grounded in freedom of expression, pluralism, and user empowerment.

This means recognizing the central role AI-generated content will play in access to information and expression in the future, and ensuring that this content is also protected by the fundamental right to free speech. Without such recognition, countries would be emboldened to impose significant restraints on a medium relied on by hundreds of millions, if not billions, of users.

Countries should also reject political demands for “neutral” or “unbiased” AI similar to those advanced by the White House or European regulators, clarify vague systemic-risk requirements such as those in the EU’s AI Act and Digital Services Act, strengthen protections for lawful but contentious speech in line with the First Amendment, and resist new forms of anticipatory digital censorship modeled on the Chinese system.

The AI race is not only about who creates the most advanced models. It is also about who sets the rules for what billions of people may read, say, and imagine. On that front, China is already running ahead. Democracies still have time to catch up, but only if they take the lead by championing the freedoms that make open societies strong.


Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech at Vanderbilt University. He is a former official at the Organisation for Economic Co-operation and Development and a former attorney.
Jacob Mchangama is the Founder and Executive Director of The Future of Free Speech. He is also a research professor at Vanderbilt University and a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE).
