Foreign Relations & International Law

The Trump Administration Targets Europe’s Content Moderation Laws

Kenneth Propp
Monday, January 12, 2026, 10:12 AM

New U.S. visa bans against Europeans over content moderation likely will elicit an EU response.

European Parliament building, Strasbourg, France. (European Union, https://tinyurl.com/54f5wnp9; CC BY 4.0, https://creativecommons.org/licenses/by/4.0/)

Just before Christmas, the Trump administration raised the stakes in its campaign against laws in Europe that require social media platforms to exercise vigilance against illegal content, hate speech, and disinformation. On Dec. 23, 2025, Secretary of State Marco Rubio issued determinations under the Immigration and Nationality Act (INA) barring five Europeans closely associated with content moderation activities from entry into the United States and ordering them deported if found in the U.S.

Rubio acted under a section of the INA allowing the exclusion of foreign persons on grounds of “potentially serious adverse foreign policy consequences.” His announcement provocatively described the five individuals as part of a “global censorship-industrial complex,” leading “organized efforts to coerce American platforms to censor, demonetize, and suppress American viewpoints they oppose.” The announcement further accused the five of “hav[ing] advanced censorship crackdowns by foreign states—in each case targeting American speakers and American companies.”

The most prominent name on the list is Thierry Breton, a former vice president of the European Commission and an architect of the EU’s Digital Services Act (DSA), its flagship legislation to regulate online content. During the 2024 U.S. presidential campaign, Breton dispatched a letter to Elon Musk regarding his online interview with President Trump, warning that “spillovers” of U.S. speech into the EU could spur the commission to take retaliatory measures against X under the DSA. Breton resigned from the commission soon thereafter, reportedly under pressure from commission president Ursula von der Leyen.

The four others hail from European nongovernmental organizations (NGOs) active in policing the internet. Josephine Ballon and Anna-Lena von Hodenberg lead a German group, HateAid, that assists individuals facing online abuse and violent threats; Clare Melford runs the Global Disinformation Index, a U.K. group that steers advertising firms away from websites associated with disinformation and harmful content; and Imran Ahmed directs the U.K. Center for Countering Digital Hate, an organization focusing on online hate and disinformation. (Ahmed, who is currently in the United States, has brought legal action challenging his deportation.)

The European Commission quickly issued a statement that it “strongly condemns” the U.S.’s actions, reiterating its “sovereign right to regulate economic activity in line with our democratic values.” France’s President Emmanuel Macron decried the U.S. measures as “intimidation and coercion aimed at undermining European digital sovereignty,” while Germany’s minister for foreign affairs, Johann Wadephul, rejected them as “not acceptable.”

U.S. Visa Law and Speech

The Trump administration’s vitriolic rhetoric may be startling, but using visa denials to address controversial speech is not a novel tactic. The INA has long granted the executive branch authority to exclude foreign persons for security or foreign policy reasons, as Congress has documented. Past U.S. administrations of both parties have resorted to it periodically.

One especially controversial episode occurred when the Reagan administration barred several Latin American and European leftists from the United States who had been invited on speaking tours by U.S. academic institutions. The administration acted then on the basis of an INA provision that authorized excluding individuals for activities that are “prejudicial to the public interest, or endanger the welfare, safety, or security” of the United States. The inviting organizations sued, asserting a First Amendment right to hear the speakers’ views. The cases divided federal courts in Washington and Boston. (Disclosure: The author was part of an interagency team that defended the Reagan administration in these cases.) A later reform of the INA replaced this authority with the foreign policy exclusion power.

Rubio, unlike Reagan, did not act to protect Americans from hearing, first-hand, foreign speech that the executive deemed harmful. Instead, Rubio’s rationale appears to be that the foreign laws these five individuals helped devise or implement could cause U.S. platforms to remove speech from social media that represents American viewpoints. These companies, when acting in the United States, enjoy First Amendment protection to decide the extent to which they will engage in content moderation. In Europe, by contrast, platforms must abide by the DSA and national content moderation laws. Many have chosen, for reasons of simplicity and risk avoidance, to establish unitary global terms of service that incorporate the requirements of European law. As a House Judiciary Committee Republican staff report earlier this year complained:

The threat to American speech is clear: European regulators define political speech, humor, and other First Amendment-protected content as “disinformation” and “hate speech,” and then require platforms to change their global content moderation policies to censor it.

European Content Moderation Law and Speech

U.S. critiques of European content moderation practices tend to concentrate on the EU’s DSA, which, supplemented by official codes of conduct, provides a harmonized framework across the EU for assessing and mitigating content risks. Designated very large online platforms (VLOPs) bear the heaviest burden, including assessing systemic risks arising from their activities. They must examine whether they are disseminating not only illegal content but also legal content that has “negative effects on civic discourse and electoral processes and public security.” The DSA’s preamble instructs platforms to “pay particular attention” to misleading or deceptive content, such as disinformation.

The European Commission leads efforts to ensure platforms comply with the DSA, and it wields the powerful club of fines that can amount to 6 percent of a company’s global revenue. It is incorrect, however, to view the commission as a ministry of truth when tackling disinformation. Instead, the commission investigates whether VLOPs have adhered to the law’s specific requirements to remove content that has been found to be illegal or hateful, and to provide transparency on how they implement their internal terms of service.

In December 2023, the commission began an inquiry into posts on X of illegal content related to Hamas’s terrorist attacks against Israel. In December 2025—in the first exercise of its power to impose fines—the commission resolved a part of the investigation that was not related to content moderation by fining X 120 million euros. The commission found that X had deceptively designed its “blue checkmark” for verified accounts, that X’s repository of advertisements failed to meet transparency and accessibility requirements, and that X had placed improper limits on outside researchers’ access to its public data.

Although the fine was lower than what many observers had anticipated, Musk reacted aggressively by calling for the abolition of the EU. The U.S. visa bans that followed at the end of December likely constitute the Trump administration’s response—so far—to the commission’s X verdict.

The DSA, however, does not stand alone. Rather, it is entwined with the laws of its European member states. Article 3(h) of the DSA defines illegal content as information that does not comply with EU or member state law, and most determinations of whether speech is prohibited, or even criminal, are made under the laws of the EU’s 27 members. This reportedly stems from Germany’s insistence that the DSA include an express reference to national law as a basis for determining unlawfulness.

The model for the DSA’s notice and takedown requirements for unlawful content is Germany’s 2017 Network Enforcement Act (NetzDG). In Germany, unlawfulness is determined by reference to its criminal laws, which include provisions against insults directed toward politicians and state institutions, incitement to hatred (Volksverhetzung), revilement of religious communities, and Holocaust glorification. The stringent speech laws are a legacy of Nazi abuses and have been described as a doctrine of “militant democracy.” Some empirical evidence indicates that NetzDG led social media platforms to err on the side of deleting more content than may be legally required, under an “if in doubt, delete” philosophy.

How the Transatlantic Speech Conflict Could Escalate

Despite the sharp rhetoric surrounding the U.S. visa bans targeting Europeans associated with content moderation activities, the administration’s action was less extreme than it could have been. It notably did not sanction any current EU or member state official. (The administration may have been guided against doing so by a limitation in the foreign policy visa ban authority that precludes sanctioning officials of foreign governments for beliefs or statements that would be lawful within the United States.)

Nor did the United States initiate an investigation into EU digital laws under its Section 301 trade authority, as U.S. Trade Representative Jamieson Greer recently floated, which could result in tariffs or other penalties. The administration also opted not to impose Magnitsky Act sanctions, financial measures that can bar commercial interaction with U.S. companies and freeze designated individuals’ assets. Such sanctions, designed for use in situations of gross human rights abuses or corruption, were levied last summer against Brazilian Supreme Court Justice Alexandre de Moraes for investigating former President Jair Bolsonaro. (The sanctions caused a political backlash in Brazil and were quietly withdrawn several months later.) Sen. Eric Schmitt (R-Mo.) has called for the administration to impose Magnitsky sanctions in response to European content moderation.

Ultimately, the Trump administration is unlikely to back away from its campaign against European content moderation laws. Washington views these efforts as a signal of support to its populist allies in Germany, France, the United Kingdom, and elsewhere. The administration is also undoubtedly responding to the disgruntlement of U.S. social media platforms like X with the burdens imposed by the DSA.

Similarly, the EU and its member states can be expected to defend their content moderation legislation vigorously. Such legislation could conceivably become even stricter in response to these pressures. Germany’s Federal Ministry of Justice and Consumer Protection, for example, will reportedly seek to lengthen potential prison terms for the crime of inciting racial hatred, and to prohibit convicted persons from running for electoral office for up to five years. The latter penalty, if enacted, could handicap the right-wing populist Alternative for Germany (AfD) party currently surging across the country.

One conceivable way for the EU to deescalate the situation would be to soften DSA enforcement against platform companies based in the United States. However, EU parliamentarians, already frustrated with the slow pace of the DSA’s enforcement, could use their oversight powers to dissuade the European Commission from such a course.

More likely, pressure from the European Parliament and member state governments will lead the EU to respond to the U.S. visa bans in some fashion. Several members of the European Parliament have already suggested injecting the issue into trade negotiations with the United States or taking action against senior U.S. tech executives who actively support the Trump administration. The topic could come onto the Parliament’s agenda as soon as this month. At a moment when Europe’s governments feel besieged by Russian disinformation and face serious electoral threats from populist parties, forbearance in the face of the U.S. campaign against content moderation seems beyond the pale.


Kenneth Propp is senior fellow at the Europe Center of the Atlantic Council, senior fellow at the Cross-Border Data Forum, and adjunct professor of European Union Law at Georgetown Law. He also advises companies on transatlantic digital policy. From 2011-2015 he served as Legal Counselor at the U.S. Mission to the European Union in Brussels, Belgium.
