Cybersecurity & Tech

Lawfare Daily: What’s Influencing Politics Online? X’s Algorithm, Creators, and the New Persuasion Machine

Renée DiResta, Nathaniel Lubin, Philine Widmer, Jen Patja
Tuesday, March 31, 2026, 7:00 AM
Do algorithms move political attitudes?

In this episode, Lawfare Contributing Editor Renée DiResta speaks with Nathaniel Lubin, co-author of “How Social Media Creators Shape Mass Politics,” and Philine Widmer, co-author of a recent Nature paper, “The Political Effects of X’s Feed Algorithm.” Together, they discuss two different layers of online influence—a platform’s algorithms and the trusted voices inside it—and their implications for mass politics.

The conversation explores what happens when recommendation systems shape what people see, and what happens when creators shape how people interpret it. They discuss whether algorithms move political attitudes by shifting exposure and salience, whether creators are persuasive because audiences trust them, and what these findings suggest about political influence in an environment increasingly organized by feeds, rankings, and parasocial relationships.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Intro]

Philine Widmer: In fact, in our study, we show that if people start using the algorithmic feed on X, as opposed to the chronological feed, they do change their political opinions, and they switch their political opinions to the right.

Renée DiResta: It's the Lawfare Podcast. I'm Renée DiResta, contributing editor at Lawfare, with Nathaniel Lubin, founder of Insights Studio, Survey 160, and the Better Internet Initiative, and Professor Philine Widmer of the Paris School of Economics.

Nathaniel Lubin: There are lots of effects happening all the time, everywhere. And your exposure to these platforms is sort of a layering of lots and lots of different nudges. Any individual nudge might be very small, or might even be close to zero, but, you know, some of them are having effects, and in aggregate they're having big effects. And that point, I think, is what's most important.

Renée DiResta: Today we are talking about social media and influence. The algorithm or the influencer: what's shaping political persuasion?

[Main Podcast]

One of the things that I really appreciated about your two papers was that, together, they highlighted two different layers of influence online, right?

So one is the platform itself: what happens when the feed changes what people see. Philine, I think your paper in Nature really focuses on that aspect of it. And then Nate, yours asks what happens when people follow particular types of creators, creators that they trust. What role do individuals play in influence and in shaping political opinion?

And so I was hoping that we could talk about the intersection of these two things, because this question comes up a lot: Is it the algorithm or is it the influencer? Or is it a combination of both?

Nate, you were the co-author of "How Social Media Creators Shape Mass Politics," a field experiment during the 2024 elections. And then Philine, your paper was "The Political Effects of X's Feed Algorithm." Maybe we can start by going into your respective papers, summarizing them and telling the listeners a little bit about them. We'll start with Nate.

Nathaniel Lubin: Yeah, absolutely. So the paper is still a preprint, "How Social Media Creators Shape Mass Politics: A Social Experiment." It came out of the 2024 context. For my part, we were working on the program side of this originally, from a nonprofit called the Better Internet Initiative, which is a 501(c)(3) nonprofit fellowship for content creators to integrate pro-social material that is fact-checked and accurate into their feeds.

And because that program is longer term in nature, we were able to plan ahead to do this research. So in the second half of 2024, we worked with some other colleagues on the paper to do a longitudinal assessment of a subset of the participants in that program, as well as some participants outside the program, looking at what the effect was of following those combinations of creators during that period.

There's a very dense amount of information in that and a lot of different ways to cut it, but the punchline is a couple of things. One is that in the control groups, people who spent more time online during that period had some shifts in their views and perspectives relative to people who were in the basic control group.

The second finding was that the interventions pretty much universally had quite strong effects relative to what we would've expected. So they did shift perspectives, knowledge of issues, and some understanding of the world. And then the main comparison point of the paper was looking at what we called cultural, or apolitical, creators versus more politically oriented creators: people who ordinarily talk about politics writ large, partisan politics generally, versus people who are mostly focusing on other issues.

And then this program was getting them to include more substantive material. What we found was that on a per-video basis, the more cultural, less political creators were more influential. They had much larger treatment effects when you looked at the data that way.

Renée DiResta: Can you talk a little bit about how this intersects with, you know, your understanding of political influence and influencers over time?

Nathaniel Lubin: Yeah, I mean, this program, this research, is based on a larger project around thinking about how influence works, and algorithms are also very much a part of that.

These are, as you pointed out at the beginning, mirror images of the same sort of question. If we think about trying to improve information landscapes, understanding the algorithms and the architecture of the platforms is one way to get at that. Another way is to change the lived experience of what humans on the platform are actually doing. In different contexts, those could have more or less effect.

So this is not an either-or way to think about it; it's another way to cut it. From a program perspective, BII, the Better Internet Initiative, was saying it would be great if some of the incentives were a little bit different, but in the meantime we're going to try to help people who are interested in taking this challenge on do that more directly.

For my part, we've thought about creators for a long time. I worked at the White House, 10 years ago now, and we were bringing in creators to the White House for the first time in 2014, 2015, for the State of the Union and some other events. So that was the genesis for me in thinking about other ways to get information in front of people.

The mental model of influence here is that repeated exposure of any kind has the potential to persuade. It doesn't mean that just because you're in someone's feed, it is effective, but it means you have the potential to persuade. And so thinking about the linear sum of all the different influences that are happening, that's the underlying theory of this.

Renée DiResta: That's a perfect transition, thankfully, into your work, Philine. I'd love for you to maybe summarize your paper.

Philine Widmer: Yes, I'd love to give you an overview of our research. We start with the observation that algorithms decide what billions of people see every day, but at the same time, somewhat surprisingly, the question of whether feed algorithms shape political opinions hasn't really found a conclusive, quantitative answer in the previous scientific literature.

A prior study found that turning off the algorithm on Facebook and Instagram had no detectable effect on political attitudes, which is a bit puzzling, because anecdotally we have suspected for quite a while now that feed algorithms matter for political opinions.

And in fact, in our study, we show that if people start using the algorithmic feed on X, as opposed to the chronological feed, they do change their political opinions, and they switch their political opinions to the right. The outcomes that we measured are on policy priorities, so what people think the government should address: should the government prioritize, say, healthcare over immigration? We asked how people thought about the criminal investigations into Donald Trump that were ongoing at the time, in summer 2023. And we also asked people about their opinions on the war in Ukraine.

But what was most puzzling to us is that switching the algorithm off didn't reverse those effects. For people who, due to our study, stopped using the algorithmic feed, we didn't find that they changed their political opinions. And this asymmetry between the two effects also explains the puzzle from the previous literature, because the study that I mentioned before only looked at one direction.

They were only able to study what happens when people who had previously used the algorithmic feed turn it off. In our case, because we were studying X, which already at the time had this choice between the chronological and the algorithmic feed, we could study the treatment in both ways, right?

For people who came to the study using the chronological feed, we could randomize some of them into using the algorithmic feed, and for people who came to the study using the algorithmic feed, we could randomize some of them into using the chronological feed. So we can really see what happens in both directions.

And now, a question you might have is: why is there this asymmetry? What our data suggest is that it's driven by the accounts that people follow. If you are using the algorithm, you also see content from accounts that you don't follow yet, and then you start following these accounts. When the algorithm is turned off, you don't unfollow these accounts, meaning that this list of followings, this network on X that you have built under algorithmic influence, is going to stay there even if you turn the algorithm off.

And maybe one thing that I would like to add is that we don't find effects on affective polarization, these feelings of warmth toward ingroup and outgroup. But we do find these effects, as I mentioned before, on political opinions on current events. And of course this is speculative: we were only following people for seven weeks, and we already found that they changed their political opinions on current issues if they started to use the algorithm.

So this begs the question, because in reality people use these algorithms for months or for years: what would happen in the long run to more deeply held opinions?

And one thing that's worth noting is that we combine different types of data. The first type is surveys at baseline and endline, where we ask people about their political opinions. But we also collect data on what people actually see under both settings: what does a person see when they go to the algorithmic tab, and what does a person see when they go to the chronological tab?

Plus, we also collect the accounts that people follow, and this combination of different types of data allows us to explain the mechanism behind the asymmetry. Another finding I would like to emphasize, from this analysis of what the algorithm promotes or demotes, is that we also looked at news outlets.

We find that the algorithm demotes news outlets, so they are much less likely to appear in the algorithmic feed. In the chronological feed, on average, around one fourth of the posts are from news outlets, compared to around 12% in the algorithmic feed. So this is really a stark decrease in the presence of news outlets, which typically follow different standards for their content, like fact-checking and other things, and this might not apply to the political activists who are promoted a lot by the algorithm.

Renée DiResta: The asymmetry is really the striking part. I wonder if we could maybe explain that a little bit. So switching the algorithm on, you see an attitudinal shift, but switching it off then does not lead people to shift back. You talk a little bit about this; maybe you can explain why you believe that is.

Philine Widmer: Yeah, exactly. That's probably the most striking finding. Coming back to these different types of evidence that we gathered, what we can show is that when you turn on the algorithm, you start following these conservative political activists.

But if we turn off the algorithm for you, you still continue with the list of followings that you built while on the algorithm. Meaning that if we switch off the algorithm for these respondents, they still follow the same types of accounts. And I think it's interesting from a policy perspective, because there's this discussion around the right to opt out of the algorithm.

But if we take this asymmetry that we discover seriously, it casts some doubt on whether that is sufficient, given the stickiness of the effects. And of course, we can't rule out that eventually people would also see effects in the other direction. But what we can clearly say is that if this happens, it's much slower than the change of opinions due to turning on the algorithm.

Renée DiResta: So you're seeing the feed shifts, the network essentially, that people are following, and then that, that network kind of outlasts the, you know, it persists even after they're, even if they toggle to something, to something subsequently, what percentage of them significantly changed who they were following during that time?

Because one of the things that's very interesting on platforms like Threads and TikTok is that who you follow doesn't actually matter very much, in the sense that they have this concept of unconnected content, right? This is essentially the For You algorithm, where they're just going to push you stuff anyway.

And so what I have noticed on platforms like Threads is that people don't bother to follow you; they just assume that they're going to see you. And you hear TikTok creators talk about this too. I see Nate kind of nodding along here. So that question of what actually motivates the follow, right? What leads that sort of holy grail of the persistent connection to actually form? I'm curious what you see in your research.

Philine Widmer: So I can't give you a specific number for the share of people. I can tell you that this switch leads to around a 0.1 to 0.2 standard deviation change in the probability of following these certain types of accounts. To come back to your, I guess, broader question:

Obviously, this asymmetry that we discussed only makes sense in a world where there is a chronological feed, right? In a world where we only have an algorithmic feed and everything is just learned from your patterns, from your attention, this kind of deliberate choice of "do I want to see more or less of this person?"

Yeah, it's not the right question to ask. And I think it's actually a very important consideration for platform regulation in some sense, because there's always this question about what I would like to see versus what I actually watch. That's not our study, but there's a growing literature on how these algorithms exploit biases

that we all have as humans: sensationalist content, perhaps very emotional content. So there's this thing of being hooked by things that we would not pick if we were asked to deliberately choose what type of experience we would like to have, what we would like to be nudged toward on this platform.

So yeah, this asymmetry is specific to your question of where there is some form of deliberate choice: yes, I want to see more of this. And I think in the meantime, on X, they've already changed things: when you go to the following tab now, there's also some algorithmic curation in it, so you have to actively choose to see only the accounts that you follow.

The broader takeaway, and here I'm obviously moving beyond what we show in this study, but something that I would find consistent with what we find, is that in some sense this stickiness could easily apply to other things too. Because, as you just said, people know "I'm going to see more of you if I watched your video, so I'm not following you." That's in some sense also a learned behavior.

So whatever these algorithms do, I think this more general idea applies: these behaviors could be sticky, whether it's the type of content you see, or your opinions, or how you go about the platform. As humans, we learn how to interact with the platform. So I think it generally cautions us a bit against looking at these things very mechanically, like we turn the algorithm on, we turn it off, everything is reversible. It's something that we should be a bit more cautious with when thinking about policies that would improve the online experience on these platforms.

Renée DiResta: Let me ask one more question that maybe ties into this question of feeds and removals. You're probably familiar with, and I believe you referenced, some of the big Meta studies, Meta the platform, and I think they were in Science: the studies that were looking at the effects of turning off or shifting feeds and the seeming non-effects there.

Do you want to talk maybe a little bit about the intersection there? Because I think they had also found that there was essentially no depolarizing effect from ceasing to consume algorithmically curated content. And the public takeaway from that, the media takeaway in the coverage at the time, was that that meant there just wasn't an effect from the feed.

You showed something very different, and this led to some very interesting discussion in the social science community about the difference in these findings, particularly as Twitter shows this very strong pull to the right in your findings. How did you think about your work relative to that prior work that had suggested, you know, oh, it doesn't matter, turning it on, turning it off, it's all the same thing?

Philine Widmer: I think it's exactly as you put it. We're not contradicting that earlier study, because what they looked at was turning off the algorithm, and they showed that this doesn't impact your political opinions, and we find exactly the same thing,

even though it's a different platform and a different time period. So it's really about this asymmetry. And it ties back to what I said before: I don't think we can mechanically assume that turning on an algorithm is the same as turning it off. That's what our empirics suggest. So in that sense, we're very consistent with what they found.

It's just that, for us, there is a bit of a deeper question going forward, also for future studies: what kind of research questions should we ask? Because we also have a bit of an information asymmetry with respect to these platforms. As researchers, it's typically very hard to know how these recommender algorithms, these feed algorithms, work.

And it's relatively hard for us to get access to data. So it requires a certain amount of imagination by the researcher to think about, okay, why could there not be an effect if you turn it off, even though we have all this anecdotal evidence? In that sense, I think it just highlights that the details of the research questions that we ask are also very important.

Another very obvious difference is that they look at the Meta platforms and we look at X. And in that sense, from a scientific viewpoint, the effects are always specific to the given platform and the given time.

But of course, one does wonder whether there is something general in what we find. Would we expect to see a similar rightward pull on other platforms too? Maybe that's the right moment to pass the word on to Nate, because I think you also speak to this question in your paper, right?

Nathaniel Lubin: Yeah. I think, I think a couple, a couple of reactions to that one. One is I think we, I totally agree with the, the challenge in reconstructing feeds after the fact when people have already been influenced by prior exposures. I mean, the larger architecture of these platforms. Includes a whole bunch of different inputs.

Right? An example that people often forget in these contexts is that the algorithms are often recommending people or accounts to follow when you create an account, or introducing recommendations that are not the recommendations of the algorithm in the feed, as part of the larger architecture more broadly.

So if a platform like Twitter, or X, just to take a random one, happens to have Elon Musk be promoted all the time, that might have an outsized effect outside of the feed itself, in the same way that, back in the day, old-school Facebook would recommend Barack Obama almost every time you launched an account. That had an effect as well.

So these things can work in different directions. I think that larger architecture around what people want is the thing that is missed in a lot of this. Philine's paper is amazing for showing a kind of contextual moment about a political effect.

There are some other papers I've seen arguing that there are differences between engagement-ranked and chronological feeds in people's satisfaction with those feeds, right? And so a few of us, in coordination with Knight-Georgetown, wrote a paper called "Better Feeds," which basically argued that people should have more control over choosing what they want, and that the incentives should be oriented around the effects of exposure to these architectures broadly.

And I think that's very consistent with what Philine was just saying: if you just focus on the input side rather than the effect side, you're going to miss most of the effect that you're concerned with.

Renée DiResta: So Nate, let's focus in, then, on influencers for a couple of minutes. Your study suggests that predominantly apolitical creators may be especially potent messengers because they're seen as more informative and trustworthy. Can you talk a little bit about that cultural creator component?

Nathaniel Lubin: Absolutely. So the distinction here is, as I mentioned before, the apolitical creators are predominantly talking about topics that are in no way considered political.

In fact, we ran a classifier on the content feeds of the different groups, and there's a table in the paper that describes the ratios. These creators are in the range of 10 to 20% political content, and this was during the height of a presidential election. That's actually quite a low number relative to the ones that were classified as political:

more than three quarters of their content was considered political. So that's the setup. We talk a bit in the paper about potential mechanisms, and that's a little more speculative. We don't have much direct evidence for what the causal reasons for the differences in effectiveness, or persuasion amounts, are.

But we have some indirect indications here that are quite convincing. One was that people were continuing to follow the cultural creators, the less political group, much more after the study period ended. So you see this kind of connection that was made outside of the incentives that we were introducing as part of the research.

We also see a pattern where the more political groups seem to be having their effect based on frequency. The number of videos actually shown during the intervention that are political is much, much greater for the political group than for the apolitical group, which makes sense.

Those are channels that are talking about this all the time anyway. So you see this frequency effect, much more like advertising, where that repetition is very likely the cause for the more political group. There are many other research directions we're interested in pursuing in future work based on this,

like diving deeper into the potential connections that people are making; the kind of parasocial notions that are in the literature are something we're interested in.

Renée DiResta: There's polling that has come out, I think polling maybe is a better way to put this than studies, arguing that people are just uninterested in news, that they're just uninterested in politics, that they're kind of burned out when they're on social media. They're there for entertainment.

And this is supposedly one of the reasons why apolitical content performs so well. Do you hear anything like this, either from the creators that you work with directly in your prior work, or reflected in the study itself?

Nathaniel Lubin: I mean, I think it's a little bit context-dependent, right? During this period, which, again, was a very high political salience period, I don't think it's likely that politics was not driving attention at a high level; certainly it was quite pervasive. I think the concern that most of the creators have is more the opposite, right? For the apolitical, cultural ones, and again, this was not part of the interventions, nothing like that, right?

It was a nonprofit intervention, but they were concerned about causing backlash with their audiences. They're concerned about getting things wrong, or being attacked for being wrong, that kind of thing. So a lot of the intervention value is getting them comfortable, giving them the capacity to do that well and accurately.

So I think if you imagine recreating the study in a different context that wasn't that sort of heightened environment, you could imagine maybe it would look different. But yeah, it's hard to know.

Renée DiResta: So, Nate, your paper also suggests that trust and parasocial connection are central to this. Can you talk a little bit about that? This comes up quite a lot, even in academia, honestly, not from the standpoint of how we should be studying it, but just: what is that thing that creators have that institutions or campaigns or legacy media and news increasingly do not?

Nathaniel Lubin: So I think there are two dynamics that we're getting at in the paper.

One is that to have the chance of a persuasion effect of any kind, a precondition is that you have to have seen the thing, right? That's a threshold that's obvious but perhaps not always considered, which is why I think a lot of the most important work here for this more cultural dynamic happens: these people who are doing that kind of content creation have large followings and they're commanding attention.

So if they choose to use that capacity and that power to express something, then they have the ability to persuade. And if the audiences of those profiles are very different from those of the more political groups, or other kinds of groups, who are maybe, to a first approximation, preaching to the choir, speaking to people who largely agree with them, their capacity for influence might be larger.

The second dynamic, and I think you were speaking to this in the question, is particularly for repeated exposures, where the history with the channels people are following matters: that repeated exposure might make audiences more likely to believe them, more likely to take what they're saying seriously on a kind of emotional level.

And depending on what they're saying, that might make it more credible, more believable, that what they're saying is not inaccurate. Again, in this research it's a little bit hard to know exactly why things are happening; you just see the output effects. But the indirect causal indications seem to be consistent with what you would see if you were thinking about this as a parasocial connection theory for why the more apolitical creators are effective.

Renée DiResta: There's an interesting set of thinking around why the news accounts didn't perform quite so well on Twitter in general, which was that the news accounts didn't ever respond back, right? I don't know if you've ever noticed, but they don't reply to you. I remember Elon actually gloating about this when he first bought the platform, because I study influencers also.

I would look at these differences in how they communicate, what the styles were, political influencers in particular. And one of the things you would notice is that Elon would mock the New York Times about this: oh, it's got no engagement, meaning it's got these massive follower counts, but there's no back and forth; when it tweets, it doesn't get the same amount of engagement as when, you know, Cat tweets.

And the reason for that, in part, I think, is that the algorithm is weighting responses, weighting prior engagement, weighting the likelihood that there is going to be some back and forth that other people can hop in on, right, to participate in, to make the platform feel like a participatory conversation.

This is one of those areas where I feel like the news accounts in the feed are not as well suited to the medium, if the medium is trying to privilege conversation, if the medium is trying to privilege social engagement. And that becomes a real challenge, because it means you're going to be getting your news through the most charismatic, evocative person, as opposed to an organization.

I don't know if you have thoughts on that, but this was something that came up as I was reading.

Philine Widmer: I think this is an excellent point. And I wouldn't read our study as anti-engagement: even if certain changes in political opinions were driven by engagement, I don't think engagement is bad per se, right? Or a platform that drives engagement.

And to a certain degree, I think there's also deeply funny, deeply informative content on social media. So there's clearly the potential for this being something fun and informative. I think the question is a bit, when it starts to replace other sources of information, for certain types of information which might by nature be a bit less interesting, or less engaging.

I think it starts to have effects on the democratic debate. And it's a bit the same discussion as with fake news, right? Sometimes reality just can't catch up with what you could potentially invent. So in that sense, yeah, right now I think we're at this extreme where engagement is really the bread and butter of these platforms.

And there might be additional specific interests of owners or platforms or other big stakeholders, but engagement maximization is clearly a big part of it. Perhaps we're a bit in an equilibrium where there's just too much weight on that, given the public information sphere, the public debate.

Engagement is not everything, right? I think if we thought about it from more of a, what would be the kind of wish list of how we would like a public debate to go, I don't think that engagement would be the only thing that we would put on that list of the ideas that we would have about how a good democratic debate would go.

And also, this being said, a lot of it is not political, right? People share memes about cats, et cetera. So our concern is specifically about how we are organizing this huge marketplace for ideas that have an impact on politics.

Renée DiResta: So we had these two kind of areas of focus.

We have, what did the machine rank? And then we have, kind of, who do I let into my feed repeatedly because I like them? This is kind of a question for both of you. Do you think these papers point to two different forces, or do you think one is upstream of the other? How should we be thinking about the intersection between these two things?

Philine Widmer: Yeah. So I think the two papers are very complementary, because in some sense, what we measure is sometimes what economists would call, sort of, a reduced form. You measure the effect of the algorithm on these opinions. And then the next question is, but how? And so in our paper, we do go this extra step, and we show that one very plausible hypothesis that we can support with the data is that it's really the accounts that you follow.

It's this content that you're exposed to. And I think that's exactly what Nate is showing in his paper, right? If you start following certain types of accounts, they will have an influence. There might be some limitations to the external validity of this, right? If the account is super boring or something, you might be asked to follow it, but it wouldn't impact you.

But let's assume it's an account that is reasonably engaging. If you are more likely to see it, there will be an effect. So I do think, taken together, they help us understand the mechanics a bit better. These content creators, they matter, even if you're kind of pushed to follow new accounts through exogenous forces.

In our case, it's the algorithm that you're assigned to by us through the experiment; in your case, if I understand it correctly, it's researcher-administered, so the researcher induces this new following, and we see that it really changes things. And I think it does caution a bit against this idea that recommendation algorithms are just helping you find the content that you were interested in all along, and that in that sense they're politically neutral because they're just showing you what you wanted to see all along. I think both papers show that what you consume online, even if changed through random forces, changes your political opinions.

And I think this makes it just much harder to defend the political neutrality of algorithms. Then, when we take into account how big these platforms are, knowing that they influence your opinions just raises some questions about whether we want to stay a bit in this kind of wait-and-see equilibrium.

I mean, there's a lot of political action going on, but I think largely it's still relatively passive. Yeah, I think just these two things taken together, for me, they highlight the need for a much bigger debate about how we want these platforms to function, especially when it comes to politically relevant information.

Renée DiResta: Did you see, in The Argument, the Substack newsletter, the very interesting essay Lakshya Jain wrote called “Twitter Is Not Real Life”? Did you happen to see this one go by? Okay, for listeners who maybe didn't see this one, The Argument is the outlet. They did a series of charts from polling data that they pulled, looking at X as a platform.

Now, this is a little bit after your study, Philine, finding that X news consumers are notably more conservative than other platform audiences at this time, right? So they note that, for example, ICE's popularity on X is close to break-even there, even while the agency is wildly unpopular on other platforms, and overall that Donald Trump enjoys a high popularity rating, still net popular, among Twitter news followers.

And the claim is not only that the audience self-sorts, but also that X under Musk, and this intersects with your point, that as X pushes users rightward, there is, he kind of puts it as, a vibes problem for people over-indexing on the platform. He's arguing that X at this point has become a conservative platform.

This is reflected in work that we saw at the Stanford Internet Observatory back in the day. We saw that Gab's user base had gone back there, right? Truth Social's user base had gone back there, because this was a platform where they could do things and get reach that they couldn't on those smaller niche platforms that they had kind of decamped to for a while. They now had a moderation environment favorable to them. And so it was a very interesting argument that he makes.

But I think, so they're markedly different from the median voter, I would guess, is the takeaway. But one thing that this raises is this question of, when we look at platforms today, how should we interpret studies? As you know, I think there's still a bit of a legacy perception among many people that platforms are still where everyone is, right? That Facebook is where everyone is, that Twitter is where everyone is.

And increasingly when you actually look at the demographics on the platforms that fracturing has occurred and that is not true. How do you both think about this?

Nathaniel Lubin: I mean, I absolutely agree with that. I think there's also a massive distinction between the content creator profile, or the active engagement profile, of users on a platform like X versus the passive consumption group, right?

And so that is a recipe for real challenges with representativeness when you work in a survey context, if you can't balance for that or know the prior population counts in some independent way. And obviously the platforms don't provide that kind of data at high fidelity by default, really ever, if they ever did.

And so, you know, it's very hard to do an assessment in that way with credibility. But I absolutely do think, in much the same way the TV audience selects which cable channel to watch, we're seeing a similar pattern in terms of what kind of channel, and then what kind of universe within that channel, you opt into, right?

YouTube probably is the most pervasive across all groups. It's probably the most important channel, despite what people would say, but your experience as a recommendation experience can be wildly different depending on what part of that world you're focusing on. And so I think, you know, the kind of power law dynamics of what the experience on the platform is, is the thing that I would point to there, right?

So in whatever world, whatever network you look at, there are gonna be a handful of very large channels, or very large influence vectors of whatever the account type, or post type, is on that platform, that at any given time are having lots of influence. And so that's sort of a dominant fraction.

But then within that, any individual person is having these parallel feed experiences that are really important. And so both are true, right? There are large effects that are having aggregate effects at mass scale, and then there are also, you know, individual effects. For any researcher or any person thinking about what's going on, you have to sort of account for both, each of those scales happening, which is hard because we don't have an independent reference frame to look at it with.

Renée DiResta: I think there's also the argument that there's the elite effects phenomenon that happens on Twitter, right? Which is that there is still a disproportionate number, even as the user base has shifted, even as the influencer base, as you note, has shifted, the creators are a little bit more skewed, that it does still shape political opinion by virtue of its role as a breaking news source, a place that people still tend to gravitate to for big news, the fact that the left hasn't necessarily congealed in any one particular alternative place.

What do you think about the elite effects argument with Twitter as it continues, you know, its salience as a place for political discourse, not only in the U.S.? I know that this is true in Europe as well.

Nathaniel Lubin: Yeah, I don't know if our study speaks to that. I can say, you know, as someone who spent a couple years of my life with TweetDeck open a hundred percent of the time and felt the pain of that, I still think Twitter has an effect in much the same way for the left that, like, CNN does, right? I mean, I turn CNN on when there's an election result or a war, and pretty much never else. And I think that's kind of the way a lot of consumers now treat X.

It's not their day-to-day, and there are the daily users who are there, but then it's still a thing that you have on your phone that, when there's a moment you wanna see something, you might go to. So I think you're right that at highly salient moments it can have an effect. You know, there might be cases where that feed exposure can have an outsized effect in sort of framing a debate at a highly salient moment.

That said, I would be cautious about over-extrapolating, because I think we over-index for those moments relative to all the rest of the time. Right. I think a commonality of both of these papers is that, you know, there are lots of effects happening all the time, everywhere, and your exposure to these platforms is sort of a layering effect of lots and lots of different nudges.

And any individual nudge might be very small, or might even be close to zero, but, you know, some of them are having effects, and in aggregate they're having big effects. And so that point, I think, is what's most important.

Renée DiResta: What do you think about the implications of your respective findings on the what-should-be-done front? Do you think that there are policy ramifications that come out of your takeaways here? Do you think that there are media literacy arguments? What do you want either policymakers or regulators to understand about what you've found, and where do you think platforms should be going?

Philine Widmer: I'll jump in here with a somewhat self-interested comment about doing a study like ours.

It takes a lot of public research money and a lot of time, and then we have one study documenting the effect of a free algorithm that billions use every day, in one context, with 5,000 people. And I don't think this is sustainable given the impact that these platforms have. I think we need a much better infrastructure to create transparency.

Some steps are being taken in this direction with the DSA, the Digital Services Act, for example. But there's still this general problem of, first of all, who decides which kinds of questions are audited? There should be some organized procedure for understanding what the questions are that we want to audit as societies, and how we can get the data to audit them.

How can we potentially even have some access to the code of these platforms, and not just the select parts of the code that the platforms themselves decide to share, but really, how can we get these insights? Because for us, any study is just one puzzle piece, right? I think there needs to be much more systematic, ongoing monitoring.

Then there's the question about transparency for users. What right do I have to understand what will happen to me if I use the algorithm for a long time? Am I just supposed to kind of figure it out as I go, or should I be able to choose much more? Should I be able to, I dunno, eventually bring along my own recommender algorithm that I can plug into the platform?

So I think there are many, many shades of this question about transparency. And for me, from a research perspective, I'm not a policymaker, I'm a researcher, that's one of the things where I see a huge potential to create better knowledge, to create more knowledge that will help us create much more informed policies.

And not just the policies per se, but also the enforcement instruments in practice. How do we make it work? Not just what would we like to regulate, but how can we also make this work in practice? How will it unfold specifically, so that people, at the end of the day, have better platforms?

Nathaniel Lubin: I mean, I would absolutely agree with the transparency point. In the American context, the Platform Accountability and Transparency Act was sort of the closest that we got; it didn't get over the line, clearly, for anyone who followed that debate. I mean, I think more broadly, we've been talking in this conversation about political persuasion.

That's not the area that I would be focusing regulation on at all. I would be thinking about, you know, the health effects, given the attention and issues there, where there's much more consensus and legacy from a product liability standpoint. So the frameworks that we looked at before, I think we should apply to that, which is to say, let's not worry, from a regulatory perspective, about inputs that people are creating.

Let's worry about the output effects of exposure. And there are ways to run, you know, randomized controlled trials and longitudinal assessments of population exposure, particularly for protected classes like kids, and to have do-no-harm principles associated with that. And, you know, if a platform can figure out a way to not cause a problem on that dimension while optimizing for, you know, engagement or any other dimension, then great, good for them. They should do it.

But if they can't, the reason why they should stop is not because engagement feeds are bad per se. It's because they would have an effect on a thing like kids' health. And so we should be focusing on the actual thing.

Renée DiResta: So holding policy, and regulators, and all of that aside, if you had a magic wand, you know, how would you redesign the feed tomorrow? Or what would you change? Would it be the incentives for creators? The feed itself? The public's ability to choose and understand how both work? What do you think is the one most impactful thing that we could be doing here?

Nathaniel Lubin: If I had a magic wand, I would hope that the, you know, product leadership of all of these companies would shift their emphasis in their optimization timelines from short term to long term, and include metrics that are associated with societal health rather than individual time. And societal health, to me, means things like social trust or human connection, metrics that are well grounded in the social science literature.

So if they had a do-no-harm principle over the long term associated with those kinds of metrics, I think we would be in much better shape.

Philine Widmer: I would like to add a small thing to this, and I very much agree with that one. I would also add that many owners of the platforms now, they're AI companies, or the same ownership also has AI companies. So my magic wand would also focus the calculus of these companies, when it comes to social media, on the actual information sphere.

Because one concern one might have now is that if you have a company that is also very, very active in other sectors of the economy, in artificial intelligence and, more generally, defense, et cetera, I think it's just much harder to think of them as having some sense of public service, the kind you might have if you're just active as an editorial force, like a newspaper.

So I would also like to see more, I mean, we're talking about the magic wand, right? So I don't have to be specific about how this exactly would go. But I would like to have these things a bit more separated, because I feel like increasingly we're getting these huge players that are active in many different sectors, and they impact billions of people in many sectors at the same time.

And I think this makes it very, very hard to keep track of what's actually happening. What are the real incentives? What is really happening? So I would find it helpful if this were more clearly separated, and in that sense, I think, less concentration of power also.

Renée DiResta: So then, I guess, to summarize our conversation, what these papers suggest, taken together, is that online political influence is not just about messages, right? It's about systems and relationships.

What the platform decides to show, who people decide to trust, and how repeated exposure across both can shape what feels salient, what feels believable, and what feels normal, right? What feels normal within the community. So then the question is not whether social media affects politics; it's where that power sits, and what it means for a public sphere that is increasingly organized around feeds and creators and the interplay between them. So I wanna thank both of you, Nate and Philine, for joining us today. Thank you for a really thoughtful conversation. Great to have you both on the Lawfare pod.

Nathaniel Lubin: Thanks for having us.

Renée DiResta: The Lawfare Podcast is produced by the Lawfare Institute. If you wanna support the show and listen ad-free, you can become a Lawfare material supporter at lawfaremedia.org/support. Supporters also get access to special events and other bonus content we don't share anywhere else. If you enjoy the podcast, please rate and review us wherever you listen. It really does help.

And be sure to check out our other shows, including Rational Security, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. You can also find all of our written work at lawfaremedia.org. The podcast is edited by Jen Patja, with audio engineering by Cara Shillenn of Goat Rodeo. Our theme song is from ALIBI Music, and as always, thanks for listening.


Renée DiResta is an Associate Research Professor at the McCourt School of Public Policy at Georgetown. She is a contributing editor at Lawfare.
Nathaniel Lubin is an RSM Fellow at Harvard’s Berkman Klein Center and creator of https://www.platformaccountability.com, a proposal for technology regulation using experiments. He is the former director of the Office of Digital Strategy at the White House under President Obama and is a member of the Council for Responsible Social Media.
Philine Widmer is an assistant professor at the Paris School of Economics. She is a co-author of a recent Nature paper, “The Political Effects of X’s Feed Algorithm.”
Jen Patja is the editor of the Lawfare Podcast and Rational Security, and serves as Lawfare’s Director of Audience Engagement. Previously, she was Co-Executive Director of Virginia Civics and Deputy Director of the Center for the Constitution at James Madison's Montpelier, where she worked to deepen public understanding of constitutional democracy and inspire meaningful civic participation.
