Cybersecurity & Tech, Terrorism & Extremism

Lawfare Daily: Deplatforming Works, with David Lazer and Kevin Esterling

Quinta Jurecic, David Lazer, Kevin Esterling, Jen Patja
Thursday, July 25, 2024, 8:00 AM
Discussing the effects of de-platforming users who had promoted misinformation.

Published by The Lawfare Institute
in Cooperation With
Brookings

In the runup to Jan. 6, lies and falsehoods about the supposed theft of the 2020 election ran wild on Twitter. Following the insurrection, the company took action—abruptly banning 70,000 users who had promoted misinformation on the platform. But was this mass deplatforming actually effective in reducing the spread of untruths?

According to a paper recently published in Nature, the answer is yes. Two of the authors, David Lazer of Northeastern University and Kevin Esterling of the University of California, Riverside, joined Lawfare Senior Editor Quinta Jurecic to discuss their findings—and ponder what this means about the influence and responsibility of social media platforms in shaping political discourse.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Intro]

Kevin Esterling: So we showed there's this direct effect of reduction of misinformation by deplatforming these users who were heavy traffickers in misinformation, and then this spillover effect with less misinformation circulating throughout the system. And then this kind of voluntary withdrawal of some of the other worst offenders leaving the platform themselves.

Quinta Jurecic: It's the Lawfare Podcast. I'm Quinta Jurecic, senior editor, joined today by two political scientists, David Lazer of Northeastern University and Kevin Esterling of the University of California, Riverside.

David Lazer: That would suggest actually more generally that for many platforms, deplatforming would be effective, that is, our result isn't just about, you know, Twitter, circa January 2021 and what would work then. But I actually think that it, it's, it suggests a more general tool.

Quinta Jurecic: They have a new paper in Nature studying one of the most striking examples of rapid widespread content moderation in recent years: Twitter's decision after January 6 to remove a huge amount of accounts promoting election misinformation. David, Kevin, and their co-authors found that this deplatforming successfully limited the quantity of falsehoods on the platform.

[Main Podcast]

I had reached out to you to talk about this really thought provoking paper that you published recently in Nature about the effect of Twitter's removal of a significant number of accounts following January 6th, 2021. So just to kind of set the scene a little bit, can you tell our listeners a bit about what you were studying here?

Kevin Esterling: Usually when social media companies take action against someone who's violating their terms of service, they do it in a really ad hoc way, where, with an individual user, they'll maybe give them a warning or suspend them briefly, right. But because it's this very ad hoc, case-by-case way, there isn't a way to kind of study the effect of deplatforming at scale.

And so what happened after January 6th was Twitter deplatformed 70,000 users who were heavy traffickers in misinformation. And so that, that event gave us an opportunity to look at, well, what happens when a social media company undertakes an intervention to address the problem of misinformation on its platform?

Quinta Jurecic: David, anything you wanna add to that?

David Lazer: I think that that's a great summary. You know, essentially, we wanted to see what the effects of deplatforming were on the amount of misinformation on Twitter. And we found, we found the effects were considerable.

Quinta Jurecic: Yeah, I found your findings really striking because just, you know, as someone who was on Twitter, on January 6th and immediately afterward, it was, you know, as, just as a matter of my own experience, really striking how different the platform seemed after, the leadership sort of brought the hammer down and removed, as you said, tens of thousands of accounts following January 6th. It certainly, it felt different.

But you know, it feeling different is a, obviously a very different thing than actually showing that, especially given the, the difficulty of doing this kind of research. So at a high level, maybe we can talk about what specifically you found. And then I also wanna talk a little bit more about, you know, what, what sparked your interest in this before we dive into really the nitty gritty of the data.

Kevin Esterling: Well, let me say really quickly what we found. We found really three different things, and this might help to explain how you experienced, Twitter differently, even though you, yourself are not a trafficker in misinformation, presumably, as far as you know.

Quinta Jurecic: Yeah, yeah.

Kevin Esterling: We'll have to follow you and find out. But the, the first thing we were able to show is, is obviously the people who were deplatformed were, were very heavy traffickers in misinformation leading up to January 6th. And then, and then obviously once they were deplatformed, they were completely removed as a, a source of misinformation in the, in the platform.

And so that's kind of an obvious effect, right? That there's less misinformation when you take the worst offenders off. But then what we were also able to show is that there was a spillover effect. Which is what you might've experienced, which is once you take the, a really large set of misinformation traffickers out of the system, that just reduces the volume of misinformation that's circulating on the platform.

And what we showed in particular was that the, the users who followed the deplatformed users also had a sharp decline in spreading misinformation after the intervention, even though they themselves were not deplatformed. And so we call that a spillover effect, right? And you can think of that as sort of showing that the deplatforming percolated then through the system, where there's just less misinformation spreading overall.

And then the third thing that we found was that the users who were not deplatformed, but who were the heaviest traffickers in misinformation, which David's group calls the misinformation supersharers, as well as users who frequently trafficked in QAnon content, even though they themselves were not deplatformed, we showed that after the intervention they voluntarily left the platform.

And I think, you know, we don't know where they went, but there was discussion at the time that they moved to other kind of alt-right platforms, presumably because they found Twitter to be less of a conducive environment for their activities. So those are the basic three things, right?

So we showed there's this direct effect of reduction of misinformation by deplatforming these users who were heavy traffickers in misinformation, and then the spillover effect with less misinformation circulating throughout the system, and then this kind of voluntary withdrawal of some of the other worst offenders leaving the platform themselves.

David Lazer: And I'd add just a, a couple of things for context. The first thing is, and this is something we expected, but that the, the amount of misinformation is really concentrated in a very small number of people. So on the one hand we were saying 70,000 people were deplatformed. Well, you know, there are a lot of people on, on Twitter, so that's actually a very tiny percentage of Twitter.

And so part of the interesting thing was, could an enforcement action against such a tiny fraction of Twitter actually have sizable effects overall? And the answer is yes. And this in part builds on an earlier paper from my lab, led by Nir Grinberg, that appeared in Science.

And it was looking at supersharing of misinformation on Twitter. In that one we found that the bulk of misinformation seemed to be coming from roughly one in a thousand accounts. And that's why deplatforming a small percentage of people could actually have a very large effect on misinformation. It's not like everyone is sharing misinformation. It's really largely coming from a, a small number of people.
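As a rough, back-of-the-envelope illustration of what this kind of concentration looks like (a minimal sketch with made-up numbers, not the authors' code or data), the share of flagged posts coming from the top 0.1 percent of accounts can be computed from a simple per-account tally:

```python
# Minimal sketch (not the authors' code): how concentrated is misinformation
# sharing? Given a per-account count of posts that pointed to low-quality
# domains, compute the share coming from the top 0.1% of accounts.
import pandas as pd

def top_share_fraction(counts: pd.Series, top_frac: float = 0.001) -> float:
    """Fraction of all flagged shares coming from the top `top_frac` of accounts."""
    ranked = counts.sort_values(ascending=False)
    n_top = max(1, int(len(ranked) * top_frac))
    return ranked.iloc[:n_top].sum() / ranked.sum()

# Made-up, heavy-tailed example: 1,000 accounts, one of which dominates.
counts = pd.Series([500, 120, 60] + [1] * 997)
print(f"Top 0.1% of accounts produce {top_share_fraction(counts):.0%} of flagged shares")
```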

The second thing that we have in the paper, it's actually in the supporting materials, but sort of interesting, is the demographic characteristics of the people who share misinformation. It is consistent with other research: it disproportionately comes from older people and Republicans. What was a bit of a surprise was that it mostly came from women, actually. So it was largely older Republican women who were deplatformed and who also accounted for the bulk of misinformation.

Kevin Esterling: Well, that's not something that, you normally you think of these misinformation traffickers as these dudes sitting in their basement, right. But it's, it's interesting and we, we actually had a discussion and it surprised the team when we found that out.

But then we thought about kind of the prominent politicians who themselves traffic in misinformation. Many of them are, many of the prominent ones are women. And so it's something we don't really think about, but then it kind of made sense to us once we thought a bit more about it. Yeah.

David Lazer: I'll actually say I had predicted this. I actually had a wager with someone who's now a colleague, but was on the Grinberg paper, Briony Swire-Thompson.

Actually, I still have to collect from her, but it's not fair, she's an assistant professor now, so I can't do that. But the wager was about whether most of the supersharers would be women or not, because in our earlier paper we found one in a thousand accounts accounted for most of the misinformation.

We just had a much smaller sample of supersharers, and we only had 16 of them accounting for the bulk of misinformation out of 16,000 people. But 12 of the 16 were women. And the question was, was that just a statistical fluke, a small-sample fluke, or did that reflect something deeper?

And it seems like it reflects something deeper about where misinformation is coming from.

Quinta Jurecic: Yeah, it's a really, it's a fascinating finding and it definitely jumped out at me right away. So, so I wanna talk a little bit about, you know, what motivated you to conduct this research and also how you conducted it, when you're, you know, kind of looking at what happens in January 6th, looking at, you know, what the, the lay of the land looks like in terms of the research that's out there. What made you think that this might be an interesting study?

Kevin Esterling: Well, I'll say that the study originated when I got an email from David with a graph that ended up being essentially what's now Figure 1 of the paper, which is just the time series of misinformation, kind of the rate of misinformation sharing over the 2020 election cycle, where there was just plainly, after January 6th, a very stark drop.

There was essentially a 50% drop in misinformation that day. And so David said, well, do you think there's a paper here? And I said, I think there is. And obviously, you know, David's lab, I mean, David will talk about this, but David's lab has long been, I mean, it's the premier lab on studying disinformation, I can say that. And so it fits with the kind of capacity and expertise of their lab. It's a really important problem for democracy. And, as I mentioned at the outset, it really presented an opportunity to study this important problem, right?

Where we actually get to see the counterfactual of what it looks like when a social media platform actually does something about misinformation because, while they do, curate content and they do enforce their terms of use, they do it in this kind of unseen, hidden way. And so this was the one opportunity where we could see, what in social science we call the counterfactual of what does the world look like when something different happens?

And so it was really a great opportunity to do the study.
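To make the kind of drop Kevin describes concrete, here is a minimal sketch, using hypothetical column names rather than the paper's actual data or code, of how a daily misinformation-sharing rate and a relative pre/post drop around a cutoff date could be computed from a panel of timestamped posts:

```python
# Hypothetical sketch (not the paper's code): daily misinformation-sharing rate
# and the relative pre/post drop around a cutoff date such as Jan. 6, 2021.
import pandas as pd

def prepost_drop(posts: pd.DataFrame, cutoff: str = "2021-01-06") -> float:
    """`posts` is assumed to have a datetime column 'date' and a boolean column
    'is_misinfo' (True if the post linked to a low-quality domain). Returns the
    relative drop in the mean daily rate, e.g. 0.5 for a 50% drop."""
    daily = posts.groupby(posts["date"].dt.date)["is_misinfo"].mean()
    cutoff_date = pd.Timestamp(cutoff).date()
    pre = daily[daily.index < cutoff_date].mean()
    post = daily[daily.index >= cutoff_date].mean()
    return (pre - post) / pre
```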

David Lazer: Yeah, I think, you know, deplatforming, as Kevin noted earlier, is one of the tools that platforms have. And the question is how efficacious it would be. And so this did present, as Kevin was saying, an opportunity to think about, you know, what happens, what could have happened. And we happened to be, my lab happened to be in the field during that time. And so that was the other thing: you couldn't have done this study after the fact and said, let's go out and get data. But we happened to be in the field getting data, and that's what really facilitated everything else we did.

It, it, it, it turned out to be a journey that is, you know. It turned out to be a much longer journey, honestly, than, than what Kevin and I envisioned in the beginning because there just ended up being a lot of interesting nuances and complexities, but it was all with data we had already collected. And that's really what made it, made it possible to do this.

And it would not be possible now. You know, I don't know what will happen on January 6th, or whatever the date is, in 2025, but we could not replicate it, because Twitter, or X, has turned the knobs off on data collection. So whatever they do, whatever they do in terms of our democracy and our society, is really not gonna be visible.

Quinta Jurecic: I definitely wanted to ask about the sort of data collection aspect precisely because as you say, you know, anytime that you talk to researchers who work on these platforms, one of the things you hear again and again is just how hard it can be to get ahold of the data and also how that, you know.

Whatever trickle there was out of that spigot has really been shut off in the last year or so. So, I mean, what, what was your experience like in getting a hold of that data and why, you know, in more detail, why is it that that will essentially be impossible going forward, at least if, if things stay the way they are?

David Lazer: Well, for researchers, Twitter was one of the primary sources of data for studying the internet. And a lot of that had to do with Twitter's APIs. APIs are basically the automated ways of communicating with a platform and potentially extracting data. And Twitter had very generous ways of pulling data out through their APIs.

They shut that down in the spring of 2023, so a little over a year ago. And so what that has meant is that much of the research, not just on misinformation, but on what's circulating on the internet and how it circulates and so on, shut down with those APIs.

I think, I think this does reflect, and it is not just Twitter, but it's also, you know, other platforms like Reddit and so on have also made it, more difficult to impossible, to extract data. So I do think that we are in a kind of scientific and societal crisis that the visibility of what's happening and understanding what the platforms are doing has been radically reduced.

And it's not like it was so great before. I, I think that, you know, in a way we, the, the, the only, thing that I would, I would describe as a sort of silver lining in this data crisis is that there was a way in which the research community was willing to take these scraps from Twitter. But in fact, Twitter is just one small corner of a vast system.

And then we only saw a small corner of that small corner, because we could see what people share, but we can't see what people see. And that's actually what we care the most about. And this is something, I'm actually on the board of something called the Coalition for Independent Technology Research, really trying to help, you know, enable researchers who are doing research on the platforms, or, you know, independent of the platforms, and so on.

But we are, we are truly in a crisis moment for understanding what is happening on the internet and what its consequences are.

Kevin Esterling: And, and pointing out as well that many of the people who were deplatformed after January 6th have now apparently been replatformed on X, and we have no idea, we don't have any idea, what's happening.

Quinta Jurecic: So with, with that sort of ominous, note in, in the back of our minds, which I do think is important to, to keep in mind, let's dig into some of the, the nuances that you found in the data. So there, there are a lot that I would definitely love to talk about, but I wanna turn it over to you first. I mean, what did you find that surprised you or what were sort of nuances that you weren't expecting?

Kevin Esterling: Well, I think before we talk about the findings and the nuances, it makes a, I think it might make sense for us to say something about the data that we use in order to draw the conclusions. And so, David's better equipped to describe this, but let me try.

So David's lab has long had a connection to the Twitter API, where they routinely had access to the API, and what they did years ago was set up a panel of Twitter users that they followed, where they individually matched Twitter users to voter records.

And so they knew the identities of the Twitter users, knew that they were people and not bots or organizations, which are also users on Twitter, right. But then they also knew something about the users, and that was one of the great innovations of David's lab, to do this matching of Twitter data to administrative records so that we could know not just what the content was, but who the people were that were spreading it.

The other thing that David's lab innovated, I think, is how we've measured misinformation. And so that's a tough concept, right? Instead of looking at each piece of content and trying to assess its information value, they have a more objective measure, which is they look at tweets that contain a URL, and then they look to see if that URL points to a website that appears on various lists of websites that disseminate misinformation. And essentially each of these lists has a different way of classifying websites as having unreliable information.

But essentially what each of them does is look to see if the website puts out content that looks like news, but when you look underneath, there's no reporting or editorial process. And so what we do in the paper is, we aren't deciding if each individual tweet is true or false.

We simply look at whether that URL points to one of these websites that is on a list of misinformation websites. And so that's how we measure it, right? So we have a panel of users and we know who they are, we have a measure of the misinformation content of their posts, and then we have a timestamp for every post, so we know exactly, on a given day, the rates of misinformation sharing.

Another thing that was lucky about the way the Twitter API worked is that we can actually detect, based on what the API returns, whether an account was deplatformed. So there was actually a direct measure of that. And so those are all the ingredients we needed in order to be able to analyze, right, what's the effect of the intervention at a given time on the users that were on the platform at the time?
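As a sketch of the list-based measurement Kevin describes, with a placeholder domain list since the study's actual lists (for example, NewsGuard-based ones) are not reproduced here, the classification boils down to extracting the domain from each shared URL and checking it against a set of flagged low-quality domains:

```python
# Sketch of the list-based measurement described above. The domain list here is
# a placeholder; the study relied on curated lists of low-quality news domains.
from urllib.parse import urlparse

LOW_QUALITY_DOMAINS = {"example-fake-news.com", "made-up-source.net"}  # placeholder

def is_flagged_share(url: str) -> bool:
    """True if a shared URL points to a domain on the low-quality list.
    Classification is at the domain level, not per-tweet truth value."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    return domain in LOW_QUALITY_DOMAINS

print(is_flagged_share("https://www.example-fake-news.com/story?id=1"))  # True
print(is_flagged_share("https://apnews.com/article/some-report"))        # False
```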

David Lazer: And I'll just note, I'm happy to take credit for my lab being innovative, but I should note that others have also built on a list-based approach to misinformation sites, and what we used in particular included a list developed by NewsGuard, or based on NewsGuard data. I just want to give credit for their work.

We combined NewsGuard and, I had mentioned that earlier paper in Science by Grinberg and others, a list that we had developed for that, which was basically looking at repeat offenders, domains that repeatedly produced misinformation. Basically, as Kevin was saying, we were making the inference that these were not serious journalistic outfits.

They were making stuff up, and, you know, because some of their stuff was fact-checked as sort of outrageously false, we could infer that their underlying process was not to produce accurate content.

And so, you know, that is useful in the sense that it simplifies the analysis a lot if you start with a list of domains. If we had infinite capacity, we might well do something like evaluate the truth value of every tweet, but we might as well go out to the ocean and try to collect the Atlantic with a tablespoon and a glass.

You know, there's no way to do that, not at scale. But there's a virtue in both approaches. There's a virtue if you can code some things as true or false, but then there's also a virtue if you have a list of domains or sources and say, these are really low quality. Maybe they're even saying something that's not technically false, but it's something that's just low quality, right? It's a signal of really low-quality content.

Kevin Esterling: What's interesting is that we get the same results no matter which list we use. And so the lists are measuring, right, the quality of these websites in different ways. And we get the, so it's not like there's, sort of cherrypicking or anything. It's sort of, no matter how you look at it, you get the same results.

And so, as I mentioned, we had kind of this three-part strategy. You know, right away, when David shared with me the figure that showed the steep drop in misinformation in the system as a whole after January 6th, we thought that was important, but there's a really big problem with drawing an inference about, well, what was the effect of the deplatforming itself on January 6th?

Because, Twitter didn't choose just a random day. They didn't spin a dial and choose a random day to do deplatforming. They deplatformed on the, on the exact same day or the day after one, one of the most significant events in, in, in all of American history, that was also a supercharged media event.

And so the study has this problem that there's what we call a confound, right? Twitter did its deplatforming, but it did this intervention exactly at the same time that a really large real-world event happened. And so the problem with just the straight sort of pre-post comparison is we weren't able to say right out of the gate whether the drop in misinformation that we observed was because Twitter had done its intervention, or if that drop could have occurred even without Twitter doing anything, because people might share misinformation and share truthful information differently after an event of that scale.

And also, we know Joe Biden was certified as the president-elect in that same time period, right, which again just might have changed the motivation for people to share misinformation. So we really don't know what would've happened had Twitter not done its intervention.

And so we tried to tackle that problem of confounding in a couple of different ways. So one is we looked specifically at just what the drop was among the people who were themselves deplatformed. And so, as I mentioned, we knew, prior to their deplatforming, that the 70,000 users who were deplatformed were very heavy traffickers in misinformation.

They were among the supersharers. And then after the intervention, obviously their misinformation trafficking went to zero because they were deplatformed, right? But the problem is, because of that confound, we can't say exactly how much of that drop is due to the intervention, because we just don't know what they would've done had they not been deplatformed.

Right. So maybe they would've shared even more misinformation, or maybe less, and we just don't know. But the one thing we say in the paper is that if you assume that these users would have shared any misinformation after January 6th, then we can say for sure that the effect of the intervention on them was greater than zero.

Right, that there was, that there was some effect. We just can't say the magnitude of the effect, but we can say we, I mean, I think it's a pretty mild assumption to assume they would've shared some misinformation and that goes to zero.

Quinta Jurecic: And, and is it fair to say, I mean, especially because a significant number of those people had previously been sharing enormous volumes of misinformation?

Kevin Esterling: Yeah, so these are, you know, the people that we're talking about are, as David mentioned, people whose accounts are essentially for weaponizing misinformation. So they're unlikely to be people who just happen to be misinformed. They're doing it kind of purposefully.

Okay. And so that's good, but that's kind of an obvious result, right? The people who were deplatformed no longer shared misinformation. So what we did as the second step was we looked at people who had a history of trafficking in misinformation during that election cycle. We took that subset of users, the people who are sort of at risk of sharing misinformation, and we divided them into two groups.

So these are people, none of whom were themselves deplatformed, but they had a history of trafficking in misinformation. And we divided these users into two groups: the users who had a history of trafficking in misinformation and who followed deplatformed users, and then those who had trafficked in misinformation but did not follow the deplatformed users.

And the reason that we decided to look at that is that, obviously, for those users who followed the deplatformed users, the deplatformed users were kind of a ready source of misinformation in their Twitter feeds. That made it easy for them to recirculate misinformation, 'cause they were being fed this misinformation by the users who eventually were deplatformed.

Right? So the ones who were following the deplatformed users all of a sudden found less misinformation in their feed because of the deplatformed users being removed, right? So they were essentially indirectly affected by the intervention, whereas the people who didn't follow the deplatformed users were not directly affected in that way.

And so we, we were, we sort of used these, sets of users as a good comparison group, right, to think about, well, what's the effect of the deplatforming on the remaining users? And the reason that was really useful for our paper is, it really helps us to address that big confound of, well, what's the, the separate effect of these real world events.

Because both of those sets of users were equally exposed to January 6th, they both had the same kind of exposure to the confound, but one was affected by the deplatforming and the other group wasn't as much. And so what we show using that study is that the users who had followed the misinformation traffickers did reduce their level of misinformation compared to those who did not follow them.

And so what that shows is there was this, what we call a spillover effect or a knock-on effect, right? Removing the misinformation traffickers removed misinformation from the followers' feeds. And so they themselves trafficked less in misinformation, in this kind of spillover effect.

And then, as I mentioned, the last thing we showed, just descriptively, is that the people that David's mentioning, the supersharers, the kind of 1% of the 1%, and also people who trafficked in QAnon content, even though they were not deplatformed, voluntarily left the platform, right? And so when they left, they would've taken their misinformation with them as well.
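The follower versus non-follower comparison Kevin walks through is, in spirit, a difference-in-differences-style contrast. Here is a minimal sketch with hypothetical column names, not the paper's estimation code, of what that comparison looks like on a simple user-by-period panel:

```python
# Hypothetical sketch (not the paper's code) of the follower vs. non-follower
# comparison: how much more did misinformation sharing drop among users who
# followed deplatformed accounts than among similar users who did not?
import pandas as pd

def spillover_contrast(panel: pd.DataFrame) -> float:
    """`panel` is assumed to have one row per user per period, with columns:
    'follows_deplatformed' (bool), 'period' ('pre' or 'post'), and
    'misinfo_rate' (share of the user's posts linking to low-quality domains).
    Returns the difference in pre-to-post change between the two groups;
    a negative value means a larger drop among followers of deplatformed users."""
    means = panel.groupby(["follows_deplatformed", "period"])["misinfo_rate"].mean()
    change_followers = means[(True, "post")] - means[(True, "pre")]
    change_others = means[(False, "post")] - means[(False, "pre")]
    return change_followers - change_others
```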

David Lazer: And just to sort of give a, you know, the, the sort of very, quick summary. We, we found, we both found that there was a sharp discontinuity, a sharp drop, after the deplatforming.

And then we showed that there, there was sort of a, there's sort of a contagion of misinformation and when you remove some of the sources, it reduces the contagion. And so those are sort of, and those two fit together very nicely in terms of capturing the broader effect. There was a direct effect of deplatforming, but then there was also a contagion or lack of contagion, that resulted from, the deplatforming.

And then, as Kevin notes, there just seemed to be an exit of people. And there's a way this sort of resonates. There's a classic piece of, I guess, political philosophy by Albert Hirschman, Exit, Voice, and Loyalty, which is one of my favorite books ever.

But it talks about the role in systems of actors having the choice: do you stick around? Do you voice your objections if you have issues? And so on. And there's a way in which we can think of Twitter forcing exit by deplatforming. And this then also reduced the voice of others like them, because the nature of Twitter and its affordances is that, you know, most of the content you see is reshared, is retweeted.

And then a lot of those decided, in terms of when we think about loyalty, which is do you stick in the system if you're not happy with it? And a lot of them decided they weren't happy with Twitter anymore and they exited. And and so there's sort of this neat, neat trifecta in, in terms of our set of results.

Kevin Esterling: Although again, we don't know now going forward, right? Where that loyalty, right, whether there's kind of reentry and loyalty, we just don't know. We have no idea what's happening on it now. Yeah.

David Lazer: Yeah, no, that's an interesting question of like what's happened now. I do think that my, my intuition is that history matters that a lot of people sort of coalesced, in and around other platforms and, I think there may be an ironic effect of the deplatforming of sort of making, you know, of making a few billion dollars for Donald Trump.

Because part of the reason why Truth Social has some viability is that that was a logical destination for many of the people who were deplatformed or who exited, that that was a sort of friendlier environment.

And that then gets to something that's hard for us to speak to, which is what the more general systemic effects of deplatforming are. That is, Twitter is not the world; even the internet isn't the world. And even just trying to understand the effects of Twitter's decisions in terms of deplatforming, it's ambiguous, because it's not like this misinformation disappeared. It just sort of went to different corners.

Now, when we're talking about the contagion effects, it may have reduced incidental exposure to misinformation, because if you deplatform people from more general platforms like Twitter, or Facebook even more so, then it reduces the opportunities for incidental exposure to misinformation. And then that leads to the question of, well, what are the effects of that, which our study doesn't really speak to.

Quinta Jurecic: Right. I actually wanted to ask about that. 'cause I think it, it speaks to some of the complexity perhaps of doing this kind of research that, you know.

The nature of a deplatforming is that it's going to affect things across multiple platforms, because people leave one and either go to another or don't. But that's actually quite hard, I would imagine, to study, because in this case you're getting your data from Twitter. And presumably, I know TheDonald.win was a site where some Twitter users who were removed from Twitter went after January 6th.

There are others, you know. I don't think that TheDonald.win had an open API. But, you know, as researchers, I would imagine it's actually quite difficult to sort of figure out how things are interacting cross-platform. Is that sort of just an inherent difficulty of doing this kind of work?

Kevin Esterling: It's extremely hard, because it's hard to track user accounts across platforms. I have some colleagues here at UCR who are working on that as kind of a computer science problem of how you identify users that have different account names, where you can kind of use clues across different accounts, both in terms of the content and in terms of the account-level information.

But it's, it's extremely hard to do. Yeah. So it's, it's a, it's kind of an important computer science problem.

David Lazer: Yeah, I think there have been some efforts at studying migration among platforms and I think, but the vast bulk of research has been single platform because the startup costs of evaluating, a platform are usually huge.

And so the notion of studying many people across many platforms is, is hard. So it, it is, not to say it's impossible, but it is hard and it is, it, it, I've seen some work floating around, but it is, it's rare. And I think one of the things we need more of is multi-platform research to capture the sort of this broader complexity.

But we're actually going in the opposite direction. We're sort of going to zero-platform research. So, you know, we are sort of entering the dark ages in terms of understanding this stuff, unfortunately. I mean, I think there are some forces pushing against that, I should note. But I do think that the platforms generally have made it harder.

And so setting up something that allows you to see multiple platforms in their interplay, which is like the internet, like that's the whole thing, right? Life is not within Facebook or Twitter; it's the interplay. And so, I mean, we had a paper some years ago that was essentially arguing that Google might be incidentally amplifying Donald Trump on Twitter, because Google searches would identify tweets, right?

And that would show tweets as results. And that it often in our audits, was surfacing tweets by Donald Trump. And Google actually is much bigger than Twitter. So to what extent, let's say, was the engagement with Trump content on Twitter being driven by Google surfacing tweets when people searched. And that, that's just a very tiny example of multi-platform research and how, like what happens in one platform, especially when it's a big platform and a smaller platform.

You could imagine that, especially since Google is such central infrastructure for the internet, Google tweaks its algorithm and then suddenly there are all these incredible changes on Twitter, and they don't even understand where it's coming from.

Quinta Jurecic: So of course we're having this conversation at a time when we're, you know, heading ever closer to the 2024 general election. Many platforms, including Twitter under Elon Musk, have really significantly rolled back their content moderation structures, both on sort of economic grounds, they're cutting costs, and, in Musk's case, I think, sort of explicitly ideological grounds.

There's a real lack of willingness to engage in these kinds of aggressive interventions. So this, I think, frames kind of a normative question, which we haven't really touched on until now. If your paper shows that, you know, deplatforming can really have an effect in decreasing the spread of bad information, I think the question is then, you know, should platforms be using that power? They seem to be inclined not to do so.

But I'm curious, after conducting this research, what the both of you think about that issue?

Kevin Esterling: So, just to recap, what we show is that deplatforming has an effect. It works. So if you want to reduce misinformation trafficking, deplatforming does it. And we note in the conclusion that this is in contrast to a related study that another group did, which looked at content removal, another strategy that social media sites have, and that had no effect. And so really what we showed is that that kind of content removal is sort of this whack-a-mole approach to content moderation. So really what we show is that deplatforming does work, but we don't say in the paper whether social media companies ought to do it, and in fact, David and I have not discussed this before.

So this is our first time, having this conversation. So this, I'll be interested to hear what David says, but let me, let me take the first cut.

So in democratic theory, falsehood plays a really important role. And the way I think about it is, the framers, our framers, you know, for all of their personal flaws, were really very good democratic theorists. And one of the core beliefs that they had is that falsehood isn't just something that a democracy has to live with, but it actually plays an important role, in the sense, and this is in their telling, that falsehood plays a positive role because it's through falsehood that you can know what's true. Right?

So when falsehood and truth compete, they had a, this belief, this kind of theory of the mind that we would now call a cognitive theory. That there's just something about our information processing that humans are able to tell something that's true from something that's false.

And they also said that if humans could not do that, then you can't justify having a democracy. So you have to believe that humans can separate truth from falsehood, or you have no reason to justify having a democracy. But the issue is that the framers, and much of the democratic theory that's been handed down to us, were written by people who quite literally used quill pens to communicate, and so they had no way to envision a world where misinformation is weaponized at the scale that it is today. Right? They just didn't envision that. And it's really an open question of, well, how do you think about it from democratic theory?

And I've kind of gone back and forth on this, but the more I think about it, the more I think that, because of the scale, right, and the changes from technology with deepfakes and AI, and just, generally speaking, the weaponization of misinformation and the technology's capacity to have it weaponized at scale, this is more like the shouting-fire-in-a-crowded-theater situation. Right? It's not the same thing as just somebody being misinformed when they say something; it's more of that kind of shouting fire in a crowded theater.

But I would say that if it's the case that technology is evolving to where we really cannot tell truth from falsehood, if that's where we're going with AI and deepfakes and all of this stuff, then it really makes me worry about whether we actually can sustain a democracy, because I don't think there's a way for democracy to function if there's some central arbiter deciding what's true and false and deciding who gets deplatformed.

And so I think that's kind of the problem, right? If we can navigate our way forward with the current technology, where we really can still separate truth from falsehood, then we're okay, and maybe, you know, we rely on social media platforms to kind of deplatform the users who are really just abusing and weaponizing misinformation.

But if it's getting to the point where ordinary users simply can't tell the difference, then I worry about whether we can even sustain a democracy, or indeed even justify it. So that's, I don't know, David, that's what I've been thinking about after we wrote our paper.

I don't know what you think.

David Lazer: Well, I may be a notch more optimistic. You know, I don't know how well the median voter in, say, 1800 was informed, right? We didn't have the institutions that we have now for people to get informed. And of course our "democracy," and I'm doing quote fingers here because you can't hear the quote fingers, was barely a democracy back in 1800. But even those who had the franchise were probably often misinformed. And of course the media at the time was partisan media that was full of propaganda and misinformation.

So I think the opportunities to become well-informed today are vastly better than they were a few hundred years ago. You know, I'm also of multiple minds in terms of thinking about misinformation, and how much of a contemporary issue this is. I think it's probably been a timeless one.

I think the issue of getting quality information, with some of it purposely trying to fool you, has always been a thing. There are ways in which things surface more now. Like, I'm trying to remember, you know, stories about political campaigns from decades past, pre-internet.

You know, early mass media, where you put flyers on everyone's cars saying this candidate molested a child or is a communist, and so on. And there would be no chance for refutation, no chance for searching for, is this true? So, you know, I think the question is: misinformation is a problem, but is it more than a niche problem?

And a niche problem can still be a big problem if you have, let's say, a mob show up at the Capitol. That niche matters. And of course there are lots of people who are misinformed about things. But they may be misinformed not because of misinformation. They may be misinformed because they don't trust institutions. There's plenty of good information out there, and they don't avail themselves of it because they don't trust the higher-quality sources of information.

And so I think, you know, I'm not sure that's an optimistic take, but it is a more optimistic take than Kevin's. In terms of what we found, we certainly show that deplatforming, at least in this case, was effective. And my intuition is that that result would generalize, because there's research suggesting, really more generally, that the set of people who share misinformation is really quite small. You have, let's say, a tiny but vigorous minority that's pumping out misinformation. And so that would suggest more generally that for many platforms, deplatforming would be effective.

That is, our result isn't just about, you know, Twitter circa January 2021 and what would work then. I actually think it suggests a more general tool. I do worry, and here I certainly agree with Kevin, about the notion of sort of centralized filters on everything. And we certainly have the sort of paradox of the moment, where things are both centralized and decentralized.

Like, anyone can throw stuff into the ecosystem, but it's all mediated by a small number of platforms, right? Twitter is a small one. Meta and Alphabet are really the two giants that, you know, mediate so much of what we see on the internet, not just in the US but everywhere.

And that's a worrisome state of affairs. And when we talk about deplatforming, even when we say it's effective, I find it a worrisome concept to say that these companies should be shutting down the visibility of certain voices and so on. And I'm not saying deplatforming should not occur, because I think there are definitely cases where it should. But the fact that that's sort of the modality of control, or one of the key modalities of control, that a small number of corporate actors literally control interpersonal communication of various kinds, is a scary state of affairs.

And I don't really have the answer, you know, within the realm of what's possible. I could have a utopian scenario, I suppose, but I'm not gonna use up our scarce airtime for that. So that is, you know, a somewhat complex and ambivalent answer, I'm afraid.

Kevin Esterling: Yeah, Quinta, if there were easy answers, David and I would tell you, but these are just really tough questions. It's just important for us as a society, right, to be engaged and to figure out how to navigate forward as technology changes.

Quinta Jurecic: Yeah, I mean, I think that I, that's absolutely right. I certainly don't have the answer either. I mean, I, I do think that what's interesting to me about your paper in particular is that, you know, at least in my space, I've had so many conversations with people over the last few years about these exact questions, and what's frustrating about them is not only that there aren't obvious answers.

But also that we're kind of guessing at whether or not any of the interventions that people are proposing would be effective anyway. In, you know, in the absence of any data about how platforms work or how users behave. And so, at the very least, I feel like part of what you've been able to do here, perhaps, is answer sort of the empirical question of whether it works.

I don't know if that helps us answer the question of whether it should be used, but at least we have, you know, we're groping towards some sense of, you know, what the options on the table actually do.

David Lazer: Yeah. And that's really what we were aspiring to, right? I mean, we, we, you know, there, there's the question of, I'm, I'm not a big fan of the term follow the science.

Because I think, you know, science should investigate things that our values determine are important. But ultimately the science is the science, and our values should determine our actions, and our values are not a scientific affair. They're a collective, non-scientific affair. But, you know, here we're just trying to illuminate what works in what ways. And our hope was to illuminate part of this. We don't even know some of the other ripple effects on other platforms and so on, but we at least could illuminate what effect Twitter's deplatforming had on Twitter.

Quinta Jurecic: Yeah. Thank you so much for coming on.

Kevin Esterling: Quinta, we're really grateful that you had us on your podcast and this has just been a great experience.

David Lazer: A wonderful discussion, really. So thank you, Quinta.

Quinta Jurecic: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support, you'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. And check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja and your audio engineer this episode was Noam Osband of Goat Rodeo. Our theme song is from ALIBI Music. As always, thanks for listening.


Quinta Jurecic is a staff writer at The Atlantic. She was previously a fellow in governance studies at the Brookings Institution and a senior editor at Lawfare.
David Lazer is a political science and computer sciences professor at Northeastern University and co-director of the NULab for Digital Humanities and Computational Social Science.
Kevin Esterling is a professor of public policy and political science, chair of political science, and the Director of the Laboratory for Technology, Communication and Democracy at the University of California, Riverside.
Jen Patja is the editor of the Lawfare Podcast and Rational Security, and serves as Lawfare’s Director of Audience Engagement. Previously, she was Co-Executive Director of Virginia Civics and Deputy Director of the Center for the Constitution at James Madison's Montpelier, where she worked to deepen public understanding of constitutional democracy and inspire meaningful civic participation.
