
Lawfare Daily: Censorship, Civilizational Allies, and Codes of Practice: How European Tech Regulation Became a Geopolitical Flashpoint

Renee DiResta, Daphne Keller, Dean Jackson, Joan Barata, Jen Patja
Tuesday, June 10, 2025, 7:00 AM
What is the European Union's Disinformation Code of Practice?

Published by The Lawfare Institute
in Cooperation With
Brookings

Lawfare Contributing Editor Renée DiResta sits down with Daphne Keller, Director of the Program on Platform Regulation at Stanford University's Cyber Policy Center; Dean Jackson, Contributing Editor at Tech Policy Press and fellow at American University's Center for Security, Innovation, and New Technology; and Joan Barata, Senior Legal Fellow at The Future of Free Speech Project at Vanderbilt University and fellow at Stanford's Program on Platform Regulation, to make European tech regulation interesting. They discuss the European Union's Disinformation Code of Practice and its transition, on July 1, from a voluntary framework co-authored by Big Tech to a legally binding obligation under the Digital Services Act (DSA). This sounds like a niche bureaucratic change—but it has provided a news hook for the Trump administration and its allies in far-right parties across Europe to allege once again that they are being suppressed by Big Tech, and that this transition portends the end of free speech on the internet.

Does it? No. But what do the Code and the DSA actually do? It's worth understanding the nuances of these regulations and how they may affect transparency, accountability, and free expression. The group discusses topics including Secretary of State Marco Rubio's recent visa ban policy aimed at “foreign censors,” Romania's annulled election, and whether European regulation risks overreach or fails to go far enough.


To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Intro]

Daphne Keller: The DSA does not do anything to give Europe more power over speech that we can say and see here in the United States. There's no expansion of, you know, when Europe might claim so-called extraterritorial authority. It is intended as a law about what gets seen in Europe.

Renee DiResta: It's The Lawfare Podcast. I'm Renee DiResta, contributing editor at Lawfare and associate research professor at Georgetown University's McCourt School of Public Policy. Today I am joined by Daphne Keller, director of the Program on Platform Regulation at Stanford's Cyber Policy Center; Dean Jackson, contributing editor at Tech Policy Press; and Joan Barata, Senior Legal Fellow at the Future of Free Speech Project at Vanderbilt University.

Dean Jackson: And I think that there's an element of the administration says we're gonna wield a big stick on the world stage. We don't really care if these are your laws that you're enforcing only on your territory, you're gonna do things the way we say. And the fewer rules there are, the fewer the big blocs of countries that are really invested in norms and values and law, the more they can get away with, right, the more coercive diplomacy they can exercise, the more corruption.

Renee DiResta: In this episode, we discuss the European Union's Disinformation Code of Practice and its transition on July 1 from a voluntary framework to a legally binding obligation under the Digital Services Act. We unpack what the Code and the DSA actually do and why they became a target of U.S. political backlash.

[Main podcast]

This sounds like the most boring thing in the world, but stay with us because it is actually very interesting. It's so interesting that Marco Rubio and JD Vance and assorted very loud paranoiacs on X have been going on about how it is the harbinger of global tyranny and the end of free speech worldwide. This is not actually true. So we're gonna discuss what is true and why we now have a policy in the U.S. that revokes visas from European bureaucrats who censor Americans. Does it actually apply to anybody?

If you don't know what the Disinfo Code of Practice is or what the Digital Services Act does, don't worry, because we are actually going to begin right there. I think maybe, Daphne, I would love to start with you, because you have done a ton of writing on the DSA over the several years it took to craft this legislation and see it come into force. Maybe you could explain to the audience what the Digital Services Act actually does.

Daphne Keller: Sure. So this is a giant piece of legislation. It would be, you know, five or 10 laws here, but for our purposes, we can think of its rules being basically in two big buckets.

One is for most platforms, including pretty small ones. There are a bunch of new process rules for how they moderate content. So if they take something down or demote it or demonetize it because it's illegal or because it violates their terms of service, the DSA adds all these things about notifying the user of why it happened and giving them an appeal and having transparency, things like that.

And notably, these are things that global civil society had been asking for for a long time, and they are also things the Trump administration has been asking for. This is part of what the FCC thinks should be, you know, layered on top of Section 230, for example. So that part shouldn't be super controversial, and in fact, that's what's missing from the Take It Down Act, this change to Section 230 that passed recently in the U.S. with Trump's endorsement. We didn't bother to provide these kinds of procedural protections. So the DSA beats U.S. law in protecting free expression in that respect.

The second big bucket is, for the biggest platforms, this much kind of mushy new regulatory relationship with the European Commission, where they're subject to ongoing audits that are really sweeping and that have turned content moderation into kind of a compliance function—which is something I've written about in Lawfare—and it also creates this leeway for the regulators to tell the platforms what to do in a way that I think is the source of most of the concern about free expression here, and that's where the codes of conduct fit in, as part of the biggest platforms' duty to mitigate risks in this way that I think a lot of free expression advocates find a little too mushy and ill-defined.

Renee DiResta: So the way I think about it, as somebody who's not a lawyer and who's read it a few times: there's the transparency piece, which I think is related both to—there's algorithmic transparency in there, and there's something that gives researchers data access, which I think people have begun to ask for. I think people have begun to actually ask X, for example, for data sets related to the performance of AfD during the German elections, right?

So now you can actually ask platforms for data access. And then I think there's also transparency related to moderation policies. So things that they did voluntarily, kind of sometimes here in the U.S. or that they used to do and now they don't, but they're required to do it under the DSA. Would that be an accurate way to describe it?

Daphne Keller: Yeah, that's right.

Renee DiResta: Okay. And then the user rights piece of it, that they have the right to appeal—again, there's sort of a mandatory right of appeal if your content gets moderated, which they also don't have here in the U.S.—but then the risk mitigation piece, these sort of things where there's this vague, hand-wavy language that they use. That's the phrase systemic risk, where nobody in the U.S. quite knows what that means, and it's not clear that it's particularly well defined over there. And that, I think, is the area where the people in sort of the civil liberties space are a bit uncomfortable with the language.

Daphne Keller: Yeah, and there's a like for real legal question that should play out in the EU about whether that risk mitigation language creates leeway for the European Commission to pressure platforms to take down speech that is legal in Europe, speech that's in that lawful but awful category under European law, and the Disinformation Code is one of the places where that really important legal question might come up most acutely. So there is this open, hard question in European law about it.

Renee DiResta: I wanna bring Joan in in a second to talk about the Code of Practice specifically, but I think the other big thing to understand is that the DSA is not optional, right? If you are a private tech company participating in the European market, you are regulated under this thing, and for not being compliant in these various areas—in the transparency area, in the auditing area, in the user rights area—you're subject to fines up to something like 6% of annual global revenue. So that's why the tech companies are upset about it.

So I think maybe let's talk about the Disinformation Code of Practice, which was previously separate from the Digital Services Act but now, on July 1, becomes part of what's called the Code of Conduct. Maybe you can tell us a little bit about that.

Joan Barata: Yeah, sure. So it's a piece of what is called co-regulation. And the European Union and the European legislature have been showing certain interest in establishing what is called co-regulation as a softer way of regulating certain industries. And we saw that already in other sectors, in the audiovisual directive.

Co-regulation means that this is an instrument that was mainly drafted by, let's say, the industry—platforms mostly—but then has been validated by the Commission, by the European institution, so that it becomes a threshold, it becomes a legal standard that will be used by European institutions and particularly the Commission as the regulator to assess whether platforms fulfill their obligations when it comes to content moderation according to the DSA, particularly when it comes to the systemic risks.

Also, something interesting is that, I mean, it is binding. I mean, according to the letter, everything that is included in the Code of Conduct is binding for those who sign it. But even for those who didn't sign it, it might also be binding in the sense that, of course, when the European Commission assesses how all the platforms deal with problems related to civic discourse, they will also use the standards of the Code of Practice, at least as a guide, to assess whether platforms are, let's say, acting correctly or not.

Renee DiResta: So my understanding—just to take one small step back—is that the Disinfo Code of Practice does maybe a few main things. It says that the platforms have to have a policy that they write for disinformation. And again, as you note, each platform participated in crafting this regulation, so they wrote it. I wanna make sure every listener understands that, 'cause I think that's actually a big deal. So Meta helped write this thing that, as we're gonna talk about, it is now kind of running away from, right? So the platforms helped write it.

And so they write their own policies, but they have to have a transparent, articulated policy, and then this makes them responsible for reporting on how well they adhere to their own policy. So it's not that the European Union bureaucrats, if you will, write their policy; they write their policy, but then they're responsible for putting out all these transparency reports and things sort of explaining that they're adhering to their policy. But they are also supposed to disclose how they're dealing with things like fake accounts and bots and deceptive ads, and how they're collaborating with fact checkers, and I think there's like a media literacy component—they're supposed to show how they're giving users tools to spot, you know, fake or misleading content.

And my understanding was that one of the main complaints—again, the same way systemic risk is very poorly defined in the DSA, or maybe this is like a cultural thing, maybe you can comment on that—is that disinformation is very sort of vaguely defined in the Disinformation Code of Practice.

Joan Barata: That's correct. I think something that we need to keep in mind is that basically we have two areas of intervention that are of concern, or particular concern, to the European institutions. One has been hate speech, hate speech online, which has triggered some documents, recommendations, etc. The other one has been disinformation.

However, these are two completely different legal concepts, because hate speech is illegal. Hate speech is illegal in all the EU member states, and all the states in the international community have in fact, according to international law, the legal obligation to ban hate speech. I know that in the U.S. this would be a controversial discussion, but there's an article in the International Covenant on Civil and Political Rights that says that hate speech should be prohibited by law, and at least this is what the Europeans do.

The problem is that disinformation, which is another big concern, as I said, is not illegal, or at least shouldn't be illegal. International human rights standards say that disinformation, as such, is not a legitimate ground to limit freedom of expression. So disinformation is one of these categories of speech that basically falls under that label of harmful—let's say lawful but awful, if you want to call it this way.

And this is what makes this problem a bit tricky, because as a matter of fact, if there was a law, I mean a direct regulation from the EU imposing obligations on platforms in terms of getting rid of disinformation, that would be inconsistent with human rights, right? So that would be a main problem. So the European Union needs to find a way to force platforms to deal with disinformation while at the same time not crossing a red line, let's say, in an area where such limits shouldn't be imposed.

So that's the first reading, and this is why the Code of Practice goes a little bit around all these matters. It doesn't say, now, platforms, you need to ban disinformation; it says platforms need to adopt certain measures to deal with the harms connected to disinformation as part of your obligation to tackle systemic risk and to mitigate systemic risk.

Renee DiResta: Right. And, and the hate speech lives in a different code entirely.

Joan Barata: Exactly. Right.

Renee DiResta: And the hate speech, though, I think—I know that the DSA has a notice-and-takedown rule, which for the listener means that the government can request content come down. In the EU they do have notice and takedown for hate speech. They do not have it for disinformation, is my understanding.

Joan Barata: That is in principle correct.

Renee DiResta: Okay. I understand there's a slippery slope argument, and we're gonna get to that for sure, but that's my understanding of the letter of the law, so to speak, just making sure we understand what is actually written into the regulation. Okay.

Daphne Keller: I wanna jump in on the question of how much this agreement is quote unquote voluntary, and how much it's just, you know, platforms committing to what they wanted to do anyway, or what they already do under their own policies.

So I, I was not in the room for any of this negotiation, but I was at Google for 10 years and I feel like I have some sense of how these things go, which is the platforms go in wanting an agreement to only do what they wanted to do anyway, right, and then once it's done, they wanna insist it was a big concession, you know, big deal, big commitment. And the government, the regulators go in wanting the platforms to commit to doing way more than they do now, and then once it's done, they wanna insist, oh, this is just voluntary, this is just what the platforms want it to do, right?

And this is why we have a 50-page document explaining what they've agreed to rather than a one-sentence document, and that leaves all kinds of slightly ambiguous language in the Code that lets, you know, one side go out and claim it means one thing and the other side go out and claim it means the other. And where the rubber meets the road will be in how the Commission interprets it in enforcement and whether the platforms choose to go to court and fight over it.

But I think for some of the Americans who are traveling to Europe and getting alarmed by what they think is going on, it's not because of what the Code actually says—that's not the alarming thing—it's that they're hearing Europeans, including regulators, whose interpretation is, oh yeah, the platforms absolutely have to take down disinformation. They absolutely have to, you know, do these demotions, etc. And so, you know, that's one side's take on what it says. But that is what is so triggering for a lot of people in the current administration, I think.

Renee DiResta: Well, I think that the hate speech piece of it gets muddled with the disinformation piece of it, and it all gets sort of turned into a big ball of complaint and rage and grievance—which, just to be clear again, and I think we'll get into this in a second, serves political purposes for the people who are very much kind of driving the discourse about it here in the U.S.

Do you wanna talk maybe about what happened with—I remember one of the former EU commissioners who's no longer serving now, Commissioner Breton, made that rather unfortunate, kind of stupid comment when Elon was interviewing Donald Trump. Do either of you wanna discuss that debacle?

Joan Barata: Yeah. Well that was, I mean, indeed quite stupid. And also, I mean, I would say it was even against the spirit of the DSA itself.

Renee DiResta: Right. Maybe summarize it so that people who are listening who are not in the weeds know what it was.

Joan Barata: Yeah. Well, there was this famous interview between Elon Musk and—I mean, Elon Musk interviewing the candidate, if I'm not mistaken, at that time. And then Thierry Breton, who was the commissioner in charge of, let's say, digital affairs in the EU, who was the de facto regulator, basically tweeted about that and warned about a possible violation of the DSA directly connected to this specific piece of content, and basically indicated that that was not acceptable under EU law and there would be consequences.

So that was, more or less, in a nutshell, what happened. I mean, that was illegal. I mean, that was contrary to the spirit of EU law and the human rights system, because first of all, the DSA, when it comes to these kinds of systemic risks in these areas—civic discourse, elections, etc.—what the DSA says is that platforms need to take a systemic approach, and then the Commission is supposed to assess whether this systemic approach is correct. But it's not about the specific pieces of content, no; it's about how the procedures, the mechanisms, the measures, the policies that platforms have can really tackle issues of disinformation in general.

So focusing on one specific piece of content is something that the European Commission is not supposed to do. The European Commission cannot give instructions when it comes to a specific piece of content, particularly if we consider that the problem was not that it was illegal, but that it could be harmful in terms of public discourse.

So that's one thing, but the other thing is that there's something that is even more simple, which is Article 6 of the European Convention on Human Rights, which establishes the right to a fair trial and says that any procedure, even an administrative procedure, where someone might be declared responsible for an illegality, needs to respect the presumption of innocence, needs to respect impartiality, so on and so forth. So the judge of that case was already anticipating on Twitter what he was going to decide in a case where he had not even opened a formal procedure yet.

So, I mean, this was wrong, as you see, at so many levels. But the good thing is that it triggered a very strong reaction from civil society groups, and in a way that was one of the final steps of Thierry Breton's political career in Europe. I believe it was kind of a final nail in the coffin.

Daphne Keller: And Renee, I think this is a great example of what I was talking about before of where politicians in Europe go out and overstate what authority they have under the DSA in ways that I think are horrifying to the regulators who are serious about the law and understand its details. And I think we're just gonna see that dynamic, we see that dynamic in the U.S. too. Politicians like to overstate their authority.

Renee DiResta: Right, and I think that's one of the tensions that we have here, where you have, I think, some really solid things, particularly the transparency and the right to appeal—some of the things that are positive—but unfortunately then it gets overshadowed by these moments.

Dean, just to bring you in here, you've argued a few times about the importance of, for example, looking at reasons why self-regulation is not enough. We've talked about notable examples where platforms have really walked back their voluntary offerings, if you will, here in the U.S., and maybe you wanna talk about that, the difference between the voluntary walk-backs here versus the inability to now walk back in Europe.

Dean Jackson: Yeah, I mean, to start, many of the companies that were involved in the drafting of the Code of Practice on Disinformation have now either left it or are trying to leave it, and so the sort of big glowing exit door sign is an obvious flaw in the self-regulatory approach. It requires a certain amount of willingness to play ball from companies.

But another one is simply that in leaving—and I wouldn't necessarily want the law to do otherwise, because there are a lot of free expression concerns here—but in leaving so much of the process of risk assessment and mitigation to the companies themselves, you know, it doesn't prevent them, I think, from disinvesting in trust and safety in the way that they have, right?

I mean, just very recently, Meta announced plans to replace human risk assessors with an AI function, and the engineers would be able to take an AI response to a description of a product change and run with that as their human rights risk assessment. And that's wild to me, coming from a company that less than 10 years ago was sort of publicly apologizing for playing a role in the Burmese genocide, right? Like, we have to bear in mind that these are global companies, global platforms that are used for speech worldwide, and that in some places the consequences for mistakes can be really, really dire, right? I mean, the conversations we're having, I think, in the United States and in the European Union can seem like small potatoes when you stack them up against the potential for violence in conflict situations, and that hasn't stopped companies from sort of backing away from previous commitments, from letting go staff in key roles.

I also wanna nod to Daphne's Lawfare piece on the compliance function of trust and safety in platforms, which I thought was really well argued. There is a sense in which trust and safety becomes a sort of dial-turning exercise when you have all these voluntary commitments that are essentially reporting commitments, and they force you to collect everything as data, and what can be measured is what gets worked on, when sometimes the problem is sort of at a higher level, at a policy level, right: are you really taking these human rights risk assessments seriously? Do you have policies against the types of hate speech that you're likely to see?

When Mark Zuckerberg walked back his company's policies on anti-trans speech, that's a policy decision that he has, in the United States, a First Amendment right to make, and if he implements his policies as stated, you know, he can do an audit report that says, look, we assessed a risk, this is the decision we made, these were the steps we took. I don't think that would pass muster in Europe, because Europe has a different approach to hate speech, but he can demonstrate that they did the work to apply the policy, right? And that's a compliance function versus the sort of outcome that I think you'd want to see from internet governance. And so it's a really delicate tightrope.

When I say that voluntary self-regulation isn't enough, that's not to say that I wanna see the heavy hand of the state come in because the risks there are really, really obvious, but it does mean that I think we have to expect, and in some cases, demand that these companies do better than they are doing.

Renee DiResta: Let's talk a little bit about maybe the heavy hand of the state here, or the tension between the different heavy hands of the states.

So we have, somewhat recently, Marco Rubio and the State Department writing these Substack posts about the need for civilizational allies in Europe. Maybe a little over a month or so ago, J.D. Vance gave this talk in Munich about Europe fleeing free speech culture. I definitely wanna hear your take on that, Joan.

We also had recently the new policy, which I believe was kind of directed at Europe as well as Brazil, that people who censored Americans on American soil—this was actually quite a complicated policy—would be denied visas, and their families would be denied visas also. This denial of visas, this framing of the DSA: again, they're using the Code of Practice conversion as a news hook to frame both the DSA as the digital censorship act and this conversion of the Code of Practice as the end of free speech, the harbinger of the onset of global tyranny.

Let's talk again about the difference between real critiques—the word disinformation is quite vaguely defined, systemic risk is quite vaguely defined, the arguments we've just made discussing the potential for overreach—versus what we've just seen, which is a denial of visas in response to an allegation that there are people in Europe with the capacity to reach across the Atlantic and censor American speech, or sort of reach up through Central America and censor American speech. I'd love to hear people's thoughts on those allegations and where we're going.

Daphne Keller: So a couple of concrete things. I think when the Disinformation Code of Conduct comes into effect on July 1, that will make no difference whatsoever to anyone's experience on these platforms. It's gonna add a bunch of costs and auditing complexity for the platforms, but like that news hook is nonsense.

Secondly, the DSA does not do anything to give Europe more power over speech that we can say and see here in the United States. There's no expansion of, you know, when Europe might claim so-called extraterritorial authority. It is intended as a law about what gets seen in Europe by people using the platforms in Europe.

So both of those are really important limitations on how big a deal this is or how valid those critiques are. But beyond that—kind of to get maybe more to the spirit of your question—like, I'm kind of here to criticize Europe's free expression background because they care about the rule of law, and criticizing them and pointing out rule of law problems might actually have some consequence there. But we should be super clear: whatever so-called censorship might happen because of the DSA, it is nothing compared to what is going on in the United States right now, right? We are seeing media mergers held up because of reporting that the Trump administration disapproves of, or local radio licenses held up. We're seeing shakedowns of law firms, we're seeing people arrested for the op-eds that they wrote. I mean, there is nothing going on under the DSA that holds a candle to the censorship that the administration is engaging in right here in the U.S. right now.

Joan Barata: Well, first of all, I have a very candid remark, which is the fact that EU citizens don't need a visa to travel to the United States. So maybe there'll be other instruments in place, but the reference to visas, when it comes to the EU, is a bit absurd. I mean, unless you want to work there, of course. That's the first thing.

The second thing is that the definition of censorship is something that, I mean, is quite open, and it depends on whether you take a strictly legal approach, which means that there's a government somewhere who censors, who decides what can be published and what cannot be published. And then of course, yeah, you can also understand it as illegitimate, non-legitimate restrictions on the right to freedom of expression. But it's not clear, I mean, what the meaning of, let's say, censorship is in this context. And also it's very hard to anticipate how this will be enforced or what kind of restrictions would trigger this consideration of censorship.

But above all, I mean, what I see here is something that I've seen in my work, I have to say, in authoritarian states so far, no, which is the idea that freedom of expression is what I consider to be the right thing to say. And if you don't say the right thing, or what I like to hear, then that is not freedom of expression. And this seems to be what is in the mind, I believe, of those politicians in the U.S. who now are claiming that there are some restrictions being imposed on them, and so on and so forth. So that is my main concern, that this kind of vague, arbitrary law will be used to basically curb dissent or attack those who take a position that is against the official policies of the state.

Unfortunately, this is not new. This is a very old trick. And we've seen that for the last decades in authoritarian states, and before that in the communist bloc, where that was very, very common practice. And the language—I mean, what is surprising is that the language is basically the same, so that's the most concerning part, I would say, to be brutally honest.

Renee DiResta: Dean, I think you wrote an article recently, which we'll link in the show notes, arguing that the visa restriction threats are intentionally vague, which potentially covers legitimate efforts by foreign officials to address things that are illegal even in their own markets—so, as Joan noted earlier, hate speech, for example, or election interference, or even incitement to violence. Maybe you can talk a little bit about how you were thinking about that in that analysis, how vagueness serves political objectives, and how perhaps this isn't just about defending free speech, but signaling support for specific sort of far-right parties and allegiances internationally.

Dean Jackson: Yeah, I think Joan made the argument pretty well, but I'll do my best to add onto it.

I think first off, when I first saw this, my first instinct was that this felt kind of petty, right, because there's so little operational detail. You know, if you think about the consular officer who has to grant a visa, there is no way for them to interpret this. There's so much subjectivity in it. What counts as censorship? What counts as enforcing it?

I also had the question of, you know, EU officials don't need visas. Marco Rubio later mentioned in a tweet Latin America, which most people took as a coded reference to Brazil, so it might apply differently there. But it is a sort of strange move, and also, you know, there's so little meat on the bone that that, in a way, is the story.

I wanna back up a little bit to the day before Rubio's announcement when the State Department released a longer piece on Substack—which I didn't realize the State Department had a Substack, but you've gotta diversify revenue streams these days. The piece was written by a State Department official who a few years ago in right wing media made the argument that the right needs to redefine freedom of speech in ways that allow it to suppress harmful speech, like critical race theory, right? He was talking about freedom to pursue, you know, their version of a good society as opposed to freedom from government imposition—a pretty radical redefining of freedom in the American context.

This official—fast forward a few years—is now writing the Substack piece in which he talks about European censorship, and if you listen to J.D. Vance's speech in Munich, this Substack piece reads sort of like fan fiction based off of that canonical entry, making very similar claims saying that Europe has kind of lost its way as a bastion of Western values, which is something that the right these days talks about a lot. And then saying that we need civilizational allies in Europe, right, political forces that will put Europe back on a track that the administration agrees with.

That piece comes out the day before Rubio announces, both through press release and on Twitter, that he'll be implementing these visa restrictions, and as Daphne and Joan have just both said, it's intentionally vague, right? There's no way really for anyone, either sitting in Europe or Latin America or in a U.S. embassy or in the State Department, to interpret these rules. Maybe internally there will be more guidance. It could be applied only for sort of media value, right, to create a spectacle if you deny a visa to someone really high profile, but it's very unclear.

That could create a chilling effect. You know, maybe some governments step away. David Kaye, in the piece that we wrote for Tech Policy Press, told us it could also do the opposite. In Europe, it could embolden officials, because they might fear that if they back down, they'll be seen as caving, and so it may actually sort of put them in a corner where they have to take stronger action.

But when we started thinking about why do this, why do this now, what's the motive here, I kind of came up with three. One is that this is a purely sort of domestic propaganda play, right? Like, the right has invested a lot of political capital in this narrative about censorship, so let's make an easy win, announce a meaningless policy. The media will have a field day with it. We'll look good for our fans, great.

I think that's there, but I don't think that's everything. There's also an economic angle. The DSA has become an unexpected, I'd say, centerpiece in trade negotiations between the U.S. and Europe. The censorship narratives have played into that, and the companies have also leaned into this, saying that the DSA represents a tax or a tariff on them in addition to imposing these censorship requirements. So there's an economic element there too.

But what I think has really been under-analyzed is the sort of ideological element, and I wanna go back to that language, civilizational allies. It's pretty clear to me that elements in the State Department, in the administration writ large, maybe not everyone, but definitely a faction, really see establishment Europe, the traditional U.S. allies, Europe as we've known it for decades, as an adversary, right? And they see the rising sort of right-wing populist movements in Europe as more natural allies. Many of these movements are Euroskeptic, right? They're skeptical of the EU. They share some of the language around censorship. They share a lot of the sentiment around immigration and migrants.

And what this really makes me think about, and Joan, I'm so glad you mentioned the way that former authoritarian regimes have used tactics like this, it makes me think about the way international affairs analysts used to talk about Vladimir Putin, like circa 2012, right? As someone who really wants to upend the rules of the game, turn over the chess board, as it were. And I think that there's an element of the administration that says, we're gonna wield a big stick on the world stage. We don't really care if these are your laws that you're enforcing only on your territory—you're gonna do things the way we say. And the fewer rules there are, the fewer big blocs of countries that are really invested in norms and values and law, the more they can get away with, right, the more coercive diplomacy they can exercise, the more corruption they can carry out in broad daylight.

That's a world order in which they're much more comfortable operating and one I think they're striving to create. And so there's a geopolitical, sort of long-term element to this too, where this move can be seen as a sort of signal to those they see as their allies on the European far right.

Joan Barata: If I may just add something: the biggest civilizational ally of Donald Trump in Europe is Viktor Orban in Hungary, and Viktor Orban is the biggest censor in Europe, and not only in the online world. Viktor Orban has also canceled licenses of television and radio stations, in some cases ones that had American interests.

And the U.S. was kind of fighting Viktor Orban in some cases because he was directly affecting the interests of American businesses that were in the field of Hungarian media. So, I mean, if they had to target someone, they would need to target the biggest ally that they have in Europe, which is Hungary. So there you have the absurdity of the whole thing as well.

Renee DiResta: I think we also saw a response from European Commissioner Henna Virkkunen, who kind of dropped a bunch of statistics in response, saying that 99% of online content removal cases between September 2023 and April 2024 related to content that was taken down from online platforms under the platforms' own terms and conditions—only 1% was triggered by trusted flaggers. Just 0.001% of cases raised by trusted flaggers led to an actual takedown decision.

She made a bunch of comments, actually, about discussions about euthanasia being removed by U.S. platforms under American law, and about naked images, statues, and other nude artworks not being censored in the EU but being barred by U.S. platforms. So I think she's trying to raise these kinds of cultural differences and differences in what counts as censorship, what counts as approved content, and just kind of highlighting the stats. Apparently they've also been going back and forth with Jim Jordan. Apparently members of Congress came to have this conversation as well.

I'm curious—I do wanna sort of move on and spend some time discussing what we would like to see, 'cause I feel like this is one of the things that always gets lost, right? You have a lot of complaining about the regulation, not so much about what we should want to see in the world. And I know all of you have written on that.

I guess I wanna wrap this kind of section of the conversation maybe by asking—I'm very curious, and one thing I haven't really been able to see very much of, except in the replies on social media—to what extent the repression narrative is gaining traction in the EU. To what extent is this sort of a U.S. culture war trope that plays very, very well among our audiences here? You really see it overrepresented on X. Is that a majority illusion, or is this something that is... You know, what is the public perception of this discourse in European media, European spheres, the European discourse?

Joan Barata: Well, what I would say is that in Europe, what you still see, what is particularly appreciated, is the fact that European politicians have decided to introduce regulations that affect and limit the excessive power of American platforms. And this narrative has been there and continues to be there, particularly now with Trump getting really harsh against the European Union.

So, I mean, even in cases like mine, and I've been very, very critical of certain aspects of the DSA, now I feel forced, as you say, to highlight the good things about the DSA, because I don't want to see myself aligned with certain positions. So even persons like me are now engaging a little bit more in defending, or better explaining, the European system, with its flaws, but also pushing back on some of the things that are said that are completely untrue. So that would be, of course, in terms of public opinion.

In terms of experts, I mean, the situation is that we see politicians are also standing firm so far, but we also need to wait and see, because this is a complex negotiation. I'm sure that the issue of taxes and other elements that are not directly related to the digital sphere will also interfere in these kinds of exchanges, dialogue, etc. So we still need to see what will happen.

Also, let's say the European regulators are a bit astonished, because their feeling is: now that we had started to get ready to implement the DSA, it seems that perhaps we'll not have to implement the DSA, or we'll be asked by the Commission to implement the DSA in a different way. Now that we finally gathered all the resources, the energies, etc., it seems the landscape will change again.

So there's this, let's say, spirit of resistance, there's also some perplexity, and there's a certain level of uncertainty as well, I mean, truth be told.

Daphne Keller: So I, I wanted to follow up quickly on, on Dean's points, which I thought were really good about sort of what's really being negotiated here with the visa threats. And I, I think we know that at least Trump personally, and presumably the administration more broadly has this negotiation approach that's like, there's some things that I want and there's some leverage that I have, and I'm gonna use that leverage to get those things.

So the things that the administration wants, probably top priorities, have to do with tariffs and NATO. The things that a platform like Meta wants: probably the top priority is not the DSA; it's things like the GDPR for privacy and the Digital Markets Act for competition and antitrust issues, both of which are bigger threats to the bottom line or to the future of the business for the platforms. But where the leverage is, is in threatening visas and in complaining about censorship, because that, you know, rallies the troops here in the U.S. and makes Americans more amenable to the agenda.

So I think that's why we're seeing these things get mixed up that don't necessarily have anything to do with each other, and it's important to keep an eye on who actually wants what and what's just a tool to get it.

Renee DiResta: No, I think that's a great point. I think we've also seen, you know, Mike Lee, one of our senators, making that point about, oh, well, maybe we should get out of NATO if they're going to censor us. That was one of the sort of first volleys, prior even to the visa situation, so you definitely do see that rhetoric coming in there in this question of, you know, where we are.

Maybe, Daphne, starting with you: what recommendations would you give EU regulators right now, both around avoiding the sort of weaponization, but also, as this Code converts, how to improve it, how to avoid—you know, we don't wanna fall into the trap, as Joan says, of pretending it's fantastic just because, you know, there are people who are saying even dumber things about it.

Daphne Keller: Yeah, so the DSA itself is a piece of legislation that was drafted really carefully to try to protect free expression in a lot of its provisions, and it went through a legislative process with lots of negotiation. So it has, you know, democratic legitimacy and it is broadly intended to meet free expression requirements.

Where the Commission or European regulators could screw that up is if they use an instrument like the Disinformation Code of Conduct to abandon all of that carefully calibrated and negotiated legislative balance and layer on a bunch of stuff that lawmakers never could have gotten as part of legitimate legislation.

So, for example, the trusted flagger provision of the DSA says the flaggers have to be vetted. They have to publish transparency reports. They can be terminated from the program if they're abusing their authority. They're only supposed to be using it to report illegal content. There are these constraints, and if a sort of alternate category of trusted flaggers under the Code of Conduct suddenly doesn't have all of those constraints, that is not a good outcome. So that is where I think European lawmakers should be very careful right now.

Renee DiResta: Joan, I know you wrote something really interesting recently about the Romanian election. Maybe in the context of what we should be doing differently, I'd love for you to talk about where you thought the DSA failed in that particular case and what it should be doing differently.

Joan Barata: I think that the case of the Romanian elections is a very interesting one, because it triggered the annulment of the election, which is a very, very strict measure, a measure that basically challenges a very important moment of democracy. And I think that should be seen as a victory for the Kremlin, who showed that by interfering, by doing certain things, they can just challenge or can force a European country to annul an election process. So it was a big victory for the Kremlin.

I would say in this sense, I mean, it's clear that the DSA didn't fulfill its promises when it comes to the Romanian elections, no? Because apparently all platforms had, let's say, submitted the risk assessment analysis. The platforms had been more or less following the guidelines established by the European Commission for elections. They had been in touch with the European Commission on how to, let's say, deal with the Romanian elections. Everything seemed to be more or less okay, like in other cases, in other European countries, and all of a sudden, the unexpected happens. And a candidate that nobody knew, who had declared zero expenses in the campaign, won the election.

So it was obvious that this is like a plane crashing, no? It cannot just be one element that can be used to explain the crash. There's a series of elements that show the weaknesses of Europe. It's true that probably, in terms of the instruments to assess systemic risks and to work on systemic risks, there were some shortcomings that were not detected, even by the Commission. But what is also true is that this showed what can happen in a European country if certain measures in the field of promoting democracy in general are not implemented, no?

Here we are talking about Romania. This is a polarized society. Nothing was done to, let's say, address this matter. Traditional political parties were extremely corrupt. There were issues of public transparency. There were issues of media concentration, oligarchs controlling the public discourse. Traditional political parties had used TikTok to promote that strange candidate because they assumed that was a good way to divide the far right. So we are talking about the traditional political parties pushing for this new candidate, and the whole thing went completely wrong.

Also, apparently, there was a total lack of coordination between the Digital Services Coordinator and the Election Commission when it comes to, I mean, sending requirements to platforms. So you see, it's an ensemble of many, many different problems that have to do with, let's say, constitutional provisions, constitutional safeguards, democracy in Europe also.

For example, the reason why this person won the election in this first round that was annulled is thanks to the diaspora, the immigrants, the Romanian immigrants in other European countries, who basically use the Orthodox Church as their social network to socialize and to, let's say, form their opinions. And the Orthodox Church is extremely conservative, and apparently it was also behind, I mean, the victory of this radical gentleman.

So, I mean, I think that unless we have a proper understanding of the shortcomings of European democracies, particularly young European democracies, structural problems, issues related to traditional media concentration that have been there for decades and are still there, we won't find a solution. It's very easy to say, no, it was because of TikTok. It's true, maybe TikTok didn't do their job properly, but it was far more than that, no? And the only way, again, to address the problems related to elections, disinformation, etc., is to have this approach that considers the whole picture.

For example, yeah, we can talk about social media infiltrated by Russia, but the thing is that that didn't happen on the eve of the elections. Russia had been working for 10 years to create the network that then, at some point, was activated during the election. And during those 10 years, nobody did anything to prevent, I mean, these networks from being activated.

So this is also another clear shortcoming of the Code of Practice. It's only reactive, huh? I mean, how platforms should react when things go wrong. And this is a very, very limited approach. It has basically nothing specific when it comes to considering the whole communication space, not only the specific social media platform, when it comes to, I mean, assessing the kind of threats that a different state can be planning, or coordination at the European level when certain threats are perceived, so on and so forth.

So there are so many things that have to do with this matter. The Code of Practice is just a little piece that ticks the box of justifying we're doing something about the online world, but it probably will not, I mean, be enough, or surely will not be enough, to address what we need to face.

Dean Jackson: If I can just react to that really quickly, because I think you did a great job laying that out, and I agree with everything you said. I just wanna hammer home that social media didn't create the problems in our societies or in our democracies, and so we can't expect tech regulation to fix all of them. That's often said as a way to hand-wave away the importance of social media and disinformation, and that's not what I'm trying to do here, but you're right that there are a lot of factors that go into something like an election. It's a very complicated machine.

That said, asking the DSA to intervene acutely in a time period like an election is hard, because elections in Europe are often much shorter than American elections—American elections are now perpetual, people are campaigning for president right now—but in Europe, you know, they're often sort of much more limited to just a few weeks. And you're right, it's a reactive set of tools, and so it's never going to catch things like those Russian networks in time.

I think something that's really important, though, is that over time there are all of these transparency mechanisms in the DSA and in the Code of Conduct that might shine a better light on how the online ecosystem works, what problems exist within it, and how we can respond to them, right? One of these is the provision for access to data for academic researchers. There are also reporting requirements: you know, if you take something down, users can request a reason, right? There's a way to mediate those conflicts between users and platforms. We don't have those things in the United States. It's actually a really useful transparency tool we could use, and it would solve a lot of the problems that the Republican Party raises about social media here.

But all that said, I also think your point about campaign finance and the role of the Kremlin in the Romanian elections points to something I thought was really interesting in the U.S. 2024 election: there are so many aspects of these problems that can just be treated as law enforcement challenges, right? If a foreign country is funneling money to a candidate, that's a crime. It's something that can be investigated. It's something that you can issue subpoenas and arrest warrants for. And that's something that Merrick Garland's Justice Department, I think, did a very good job of, without crossing the line into getting mired in debates about political content here.

And so that's just an example of how, like you can widen the toolbox beyond tech because the problem set is so much broader than tech. But then over the long term, I think the DSA once again is, is really something that is supposed to play out over many years, rather than thinking of it as something that like intervenes in an emergency.

Another example of that is just the audit reports that it requires. We saw the first round of those come out last year, and they were kind of a mess, right? They're not standardized in any way, they're hard to parse, they don't say anything of great value. But the exercise of having to do them lets regulators and companies sit down and say, okay, what do we wanna see next time? How do we get to a more usable set of reports?

And hopefully over time, combined with things like academic researcher access to platform data, we just get a better picture and understanding of how social media works and what its impacts on our societies are, and that equips us to make better policy. In a better, more boring world, that's how the DSA would be playing out, and we wouldn't be having these conversations about censorship.

Renee DiResta: I know we're coming up on time. I'll just say that the Twitter Moderation Research Consortium—rest in peace—you know, we had fantastic and entirely voluntary researcher data sharing relationships there that the Stanford Internet Observatory pioneered with Twitter at the time. What would happen, for those who are unfamiliar, is that Twitter would identify networks of inauthentic actors, or we would sometimes identify networks of inauthentic actors, like Russia's Wagner Group operating in Africa, and there would be a bi-directional conversation: hey, we're seeing this thing, have a look, right? And then if it was confirmed that it was an inauthentic actor, the pages or the accounts would come down under the inauthentic activity policy, and we would do an investigation. We would publish our findings publicly, you know, putting them out for the media, for the public, for people to understand what these campaigns had done, whether they had some impact, who was seeing the content, you know, all the different things that we could—well, who was seeing the content was the hardest part, but whether the content was receiving engagement, maybe, is how I should put that.

We would do these reports, and then sometimes Twitter would actually hash the usernames so that they were obscured and release the data fully to the public, so anybody could request downloads of those data sets. They gradually stopped doing that because they became concerned that foreign governments, particularly authoritarian governments, were looking to try to dig into those data sets. And so there were some questions about whether to gate that, whether to have some privacy protections, you know, some gating to researchers and sort of vetted individuals, but there were these sort of transparency efforts that were pioneered. And then they completely stopped, because when ownership changed hands, the new owner did not want this to happen, and that program ended.

And that's one of the examples of voluntary transparency versus the ability under the DSA for a European researcher to request those data sets, which we do not really have, or it's unclear whether we have, here in the U.S. unless we can make a systemic risk argument.

But I wanna thank the three of you for joining me today for this conversation. It's been a fascinating discussion, and as we've seen the debate around both the Disinformation Code of Practice and the Digital Services Act involves balancing legitimate concerns about disinformation with the essential protection of free expression. The challenge ahead for European regulators and for democratic societies worldwide is pretty clear. We have to think about confronting real threats without falling into traps, you know, set by ideological or political opportunists as well.

So to keep following this critical conversation, keep listening to The Lawfare Podcast, reading the work of our esteemed panelists here. I'm gonna throw a lot of stuff into the show notes, and thank you all so much for joining us today.

The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. Check out our written work at lawfaremedia.org.

This podcast is edited by Jen Patja and our audio engineer this episode was Cara Shillenn of Goat Rodeo. Our theme song is from alibi music. As always, thank you for listening.


Renée DiResta is an Associate Research Professor at the McCourt School of Public Policy at Georgetown. She is a contributing editor at Lawfare.
Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center. Her work, including academic, policy, and popular press writing, focuses on platform regulation and Internet users' rights in the U.S., EU, and around the world. She was previously Associate General Counsel for Google, where she had responsibility for the company’s web search products. She is a graduate of Yale Law School, Brown University, and Head Start.
Dean Jackson studies democracy, media, and technology. As an analyst for the January 6th Committee, he examined social media's role in the insurrection. Previously, he also managed the Influence Operations Researchers’ Guild at the Carnegie Endowment for International Peace and oversaw research on disinformation at the National Endowment for Democracy. He holds an MA in International Relations from the University of Chicago and a BA in Political Science from Wright State University.
Joan Barata is a senior legal fellow at the Future of Free Speech Project at Vanderbilt University and a fellow at Stanford’s Program on Platform Regulation.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
