
Regulation or Repression? How the Right Hijacked the DSA Debate

Renee DiResta, Dean Jackson
Monday, June 2, 2025, 4:40 PM
The conversion of a disinformation code from self-regulation to requirement sparks an international tiff. What’s really happening?
Flags outside the European Commission, Dec. 20, 2016. (LIBER Europe, https://www.flickr.com/photos/libereurope/30917370724, CC BY 2.0, https://creativecommons.org/licenses/by/2.0/deed.en)


On July 1, the EU Disinformation Code of Practice—a set of voluntary guidelines aimed at addressing disinformation in European political discourse—is set to transition from an obscure piece of self-regulation into the Code of Conduct on Disinformation, a recognized component of the Digital Services Act (DSA).

This may sound like a minor administrative shift in the field of tech regulation. But if you listen to prominent conspiratorial voices on X—and now in the U.S. State Department—it is a seismic event: the day free speech dies and a new era of global totalitarian censorship begins. Never mind what the State Department or the Trump administration is doing on the free speech front—disappearing protesters, surveilling social media, revoking visas for students who write op-eds. Twitter Files writer Michael Shellenberger recently declared the European Parliament “the greatest threat to free speech in the Western world,” and like-minded crusader Matt Taibbi called the DSA “the most comprehensive censorship law ever passed in a Western democracy.” Mike Benz, formerly alt-right anon Frame Games (who made content about how Hitler had some decent points, and got moderated), has been framing the upcoming transition in podcast appearances this way: As Trump solves the domestic censorship crisis, American “censors in exile” and their “allies in foreign governments” plan to use “foreign censorship laws to coerce American social media companies and American citizens about what they can and cannot post online.”

European tech regulations do not determine what American citizens can or cannot post online. Yet on May 28, Secretary of State Marco Rubio tweeted: “For too long, Americans have been fined, harassed, and even charged by foreign authorities for exercising their free speech rights.” Rubio’s missive, which offered no examples, was accompanied by a more official press release announcing that the State Department would restrict visas for foreign officials engaged in such alleged acts of censorship, as well as visas for their family members. And with that, the increasingly familiar journey from baseless claim to enacted policy happened once again.

While right-wing populists are using the upcoming July 1 integration as a news hook to allege that Western Europe has succumbed to authoritarianism, most technology policy professionals have barely registered the development at all. The Code’s integration into the DSA has been largely treated as a technical update to a sprawling piece of legislation. The DSA itself is primarily a transparency and accountability law. It requires that “very large online platforms” (VLOPs), those with more than 45 million monthly active users in the EU, conduct risk assessments, disclose how their systems function, share data with researchers, and offer meaningful recourse to users whose content has been moderated. The regulation also includes explicit protections for free expression, such as the right to appeal moderation decisions and receive explanations when posts or accounts are restricted. The transparency components are the opposite of censorship.

There are legitimate criticisms of the Disinformation Code of Practice—and of the DSA’s notion of “systemic risk” more broadly—that non-hysterics have made for years. “Disinformation” often includes legal speech, and the Code never clearly defines the term, creating the risk of overreach or politically motivated enforcement. For better or worse, it leaves the platforms with broad discretion over how to identify and mitigate disinformation, but requires them to develop and enforce publicly documented policies, and to report on how they address common tactics like impersonation of public figures, deceptive advertising, and coordinated networks of fake accounts. Some analysts argue the European regulatory framework is too enamored of quantifiable compliance metrics—post removals, account suspensions—at the expense of deeper structural reform. There is also the concern that the regulation could be abused in the hands of the wrong government. Regulators should be working to mitigate these issues to ensure the protection of civil liberties.

These are complicated and nuanced questions, not cause for apocalyptic outcry. Yet because few things cause the average person’s eyes to glaze over faster than discussions of European tech regulation, the narrative battlefield is tilted toward the cranks. They can simply yell “tyranny,” even as the bureaucrats and politicians responsible for the policy rarely spend time clearly articulating its purpose. As a result, the public conversation is dominated by those most invested in misrepresenting it.

What is the EU Code of Practice on Disinformation?

The EU Code of Practice on Disinformation was initially drafted in 2018 as a form of self-regulation. Born out of a working group that included industry representatives from the Big Tech companies, advertising industry players, researchers, and civil society organizations, its goal was to “counter the threat of disinformation” in the European Union. To this end, the working group jointly created a set of voluntary commitments. These included:

  1. Creating a shared definition of terms like “political advertising” and “issue-based advertising” as well as a common framework for disinformation “tactics, techniques, and procedures” so that metrics could be compared meaningfully across time, national borders, and platforms;
  2. Improving transparency in political advertising and preventing “purveyors of disinformation” (for example, clickbait farms or pages operated by foreign state media or intelligence operations) from benefiting from ad revenue and placement;
  3. Making best efforts to root out coordinated inauthentic activity, such as automated bot networks, impersonation, or influence operations by state actors;
  4. Investing in measures like fact-checking and media literacy to help consumers access diverse and high-quality sources of information;
  5. Guaranteeing data access for researchers;
  6. And releasing transparency reports and other materials to help assess the code’s implementation (these were eventually aggregated into a transparency center).

After an assessment in 2020 and EU guidance for improving the code in 2021—such as by having clearer metrics, procedures, and definitions—stakeholders convened again to release a “Strengthened Code of Practice on Disinformation” in 2022, which updated the 2018 code while stressing that “fundamental rights must be fully respected in all the actions taken to fight Disinformation.” At present, the strengthened Code has 40 signatories across both industry and civil society.

Despite—or perhaps because of—its self-regulatory nature, compliance with the Code was often incomplete and haphazard. Corporate transparency reports often provided only topline detail about enforcement of platform policies covered by the Code.

It is important to note that several of the large tech companies—notably Meta—played a significant role in shaping the Disinformation Code of Practice. They not only agreed to it; they helped write it. Meta’s early report submissions tout its “deep involvement” in the process, praise the fact-checking component, highlight the positive results of its efforts, and reiterate the importance of the whole-of-society fight against disinformation. However, corporate commitment to the code has begun to shift with the political tides. In 2023, Elon Musk pulled X out of the Code and, in early 2025, Google, Microsoft, and Meta wavered in, or attempted to renegotiate, their commitments to fact-checking or certain advertising restrictions. TikTok said that it would stay signed up to the commitments provided that other signatories did likewise.

Despite the wavering, the EU announced plans in February to integrate the previously voluntary Code into the DSA effective July 1. The DSA is not voluntary at all; failure to comply with DSA provisions comes with fines as large as six percent of global annual turnover. For a company like Meta, that could be billions of dollars. It is a potentially significant shift—one that has drawn notice from those who follow conversations about tech accountability and free expression.

In principle, platforms can still opt out of the Code. But in practice, its inclusion under the DSA means they’ll be expected to show that they’re mitigating disinformation as a systemic risk—using tools and processes that closely resemble those laid out in the Code itself.

It’s worth noting that several of the Disinformation Code of Practice’s provisions already have counterparts in the DSA; this is not a sudden bolt-on of entirely new rules. The DSA already guarantees data access for researchers, and includes a transparency reporting requirement—one that is more robust than what the Disinformation Code of Practice requires. It also includes mandatory risk assessments for VLOPs, which require platforms to demonstrate that they are auditing and mitigating systemic risks.

However, as scholars have noted, the DSA offers only a vague definition of “systemic risk,” raising concerns that it might enable a slippery slope of overzealous enforcement. The Code of Conduct on Disinformation appears to have been positioned, in part, as a way to address that problem. As Stanislav Matejka, vice-chair of the European Platform of Regulatory Authorities, told Tech Policy Press, “Since the DSA does not explicitly define systemic risks related to disinformation, the Code outlines concrete measures that signatories apply to combat these risks.”

Why Talk about This Now?

The Code of Practice and the DSA have drawn plenty of criticism—but while some of it is grounded in good-faith concerns about overreach, an increasing share is part of a broader campaign to delegitimize content moderation and deny that disinformation campaigns are real. Many of the American politicians shouting the loudest about the Disinformation Code of Practice and the DSA, such as House Judiciary Committee Chairman Jim Jordan (R-Ohio), are prominent election deniers who have spent years attacking institutions that studied election disinformation—and companies that moderated election disinformation—seemingly in an effort to advantage their political allies. Now, that apparent effort to maintain an advantage is extending to preferred “civilizational allies,” which include far-right political organizations and leaders such as Alternative für Deutschland (AfD), Marine Le Pen, Viktor Orbán, and others in Europe.

Accomplishing this requires reframing Europe’s regulation of private companies operating in its market as a grave and illegitimate threat to global freedom.

Through distortion and oversimplification, figures such as Rep. Jordan have falsely claimed that the DSA’s risk assessments for deceptive activity—which explicitly require “particular consideration to the impact on freedom of expression”—are government mandates forcing platforms to remove legal speech. In reality, the DSA does not and cannot require the removal of content unless it is illegal under European law, and any requests must be disclosed publicly. That didn’t stop Vice President J.D. Vance from giving a falsehood-riddled speech in Munich, darkly warning of “thought crime” and the “retreat” from free speech in Europe. U.S. leaders are warning of severe repercussions: In addition to Rubio’s plan to ban the families of people who might make a content moderation decision that “censors an American” (whatever that turns out to mean), Sen. Mike Lee (R-Utah) is publicly musing about a U.S. withdrawal from NATO.

These lines are expected from hyperpartisan, performatively aggrieved politicians. But in Silicon Valley, tech executives are suddenly singing from the same hymnal. Days before Trump’s second inauguration, Mark Zuckerberg announced the end of Meta’s partnerships with “biased” fact-checkers, first in the United States but eventually elsewhere, and pledged to work with the new administration to “push back” on Europe’s “ever-increasing number” of laws “institutionalizing censorship.” Joel Kaplan, a longtime Meta global policy executive, called European fines a “tariff” on U.S. companies in a speech a month later.

Meta has skin in the game: It is currently under investigation for potential violations of DSA rules on deceptive advertising, for demoting political content, for lack of safeguards during European elections, and for inadequate measures for flagging illegal content. While the merits of these allegations can, and will, be debated, well-reported harms to users on Meta platforms (for example, widespread scams) suggest regulators may have legitimate cause for concern.

X is also under investigation, and for a broader array of potential violations. EU Commissioners have opened formal proceedings to probe X’s approach to removing content illegal under EU law, the effectiveness of the “Community Notes” system, the reliability of its new system for verifying user identity, and its lack of transparency mechanisms. Commissioners are also exploring whether X’s recommendation algorithms violate EU law. (This last point of inquiry came after suspicions that Musk might use X to favor Germany’s extreme far-right party AfD in the country’s elections—the same party that the State Department referenced in its recent Substack post.)

It is not surprising to see platforms attempting to get out of regulations and fines. Meta and X in particular appear to have opportunistic views on where values end and business begins; Meta recently argued in a U.S. court case that it doesn’t have any legal obligation to minimize scams on its platform. But this is a sharp change from the rhetoric in the platforms’ own reports touting the successes of, for example, Meta’s fact-checking initiatives in reducing the spread of false and misleading content.

Similarly, the political right’s cries of totalitarianism and censorship over the DSA and the Disinformation Code of Practice are a rather histrionic about-face when we consider recent tech bills supported by Republican lawmakers. The Platform Accountability and Transparency Act, for example, sought to create a process whereby the National Science Foundation could facilitate access to social media data for research purposes. The Honest Ads Act, a bipartisan bill sponsored by Sens. Amy Klobuchar (D-Minn.), Mark Warner (D-Va.), and Lindsey Graham (R-S.C.), sought to require transparency in political advertising. If these kinds of disclosure and oversight measures were once seen as commonsense transparency and accountability, calling them “authoritarian” now doesn’t hold up.

And while it’s counter to U.S. speech culture and First Amendment law to legislate something like a fact-checking requirement, it’s worth remembering that fact-checking is not censorship—it’s adding more speech, and it’s an intervention that enjoys broad public support. Community Notes is a promising innovation that more platforms should adopt, but it only works if users trust that the system is fair—which requires transparency. Unfortunately, there’s no requirement for American social media companies to be transparent. There’s also no requirement that they take down networks of inauthentic bots (another thing U.S. users support). One day, Twitter was a global leader in transparency and researcher access; the next, it revoked application programming interface (API) access and began charging up to $42,000 a month for data that academics previously accessed for free. Musk additionally fired much of the integrity team responsible for investigating foreign interference (in part due to agitation by Benz). Voluntary systems work until they don’t. That’s what regulation is for. The DSA regulators may ultimately accept Community Notes as a way to meet fact-checking requirements—but if so, it will likely come with a governance framework.

The U.S. regulatory approach to Big Tech has been built almost entirely around voluntary commitments: platforms self-regulate, (some) researchers negotiate access to data, and the public trusts that content moderation decisions are made in good faith. Europe, by contrast, has taken a heavier hand. The European approach can be vague, at times cumbersome, and overbroad. But the downside of the American approach is that when a platform decides to walk away, there’s nothing to stop it. 

Ultimately, EU law does reflect a different approach to rights balancing—but not an authoritarian one. There’s plenty to critique in how the DSA and the Disinformation Code of Practice have been implemented, and work to be done to refine them. But the loudest voices in the conversation aren’t engaging in good-faith policy critique. They’re engaged in a very deliberate effort: tech executives looking to avoid obligations they once voluntarily embraced, and populist politicians looking to bolster their ideological allies abroad by framing regulation as censorship. If we let these actors dominate the regulatory narrative, we don’t just miss the opportunity to improve the regulations that exist—we risk undermining the very idea of responsible governance online.


Renée DiResta is an Associate Research Professor at the McCourt School of Public Policy at Georgetown. She is a contributing editor at Lawfare.
Dean Jackson studies democracy, media, and technology. As an analyst for the January 6th Committee, he examined social media's role in the insurrection. Previously, he also managed the Influence Operations Researchers’ Guild at the Carnegie Endowment for International Peace and oversaw research on disinformation at the National Endowment for Democracy. He holds an MA in International Relations from the University of Chicago and a BA in Political Science from Wright State University.
