
How Misinformation Spreads on Social Media—And What To Do About It

Chris Meserole
Wednesday, May 9, 2018, 10:00 AM

Opportunities to glimpse misinformation in action are fairly rare. But after the recent attack in Toronto, a journalist on Twitter unwittingly carried out a natural experiment that shows how quickly “fake news” can spread.


“We take misinformation seriously,” Facebook CEO Mark Zuckerberg wrote just weeks after the 2016 election. In the year since, the question of how to counteract the damage done by “fake news” has become a pressing issue both for technology companies and governments across the globe.

Yet as widespread as the problem is, opportunities to glimpse misinformation in action are fairly rare. Most users who generate misinformation do not also share accurate information, so it can be difficult to tease out the effect of misinformation itself. For example, when President Trump shares misinformation on Twitter, his tweets tend to go viral. But they may not be going viral because of the misinformation: All those retweets may instead owe to the popularity of Trump’s account, or the fact that he writes about politically charged subjects. Without a corresponding set of accurate tweets from Trump, there’s no way of knowing what role misinformation is playing.

For researchers, isolating the effect of misinformation is thus extremely challenging. It’s not often that a user will share both accurate and inaccurate information about the same event, and at nearly the same time.

Yet shortly after the recent attack in Toronto, that is exactly what a CBC journalist did. In the chaotic aftermath of the attack, Natasha Fatah published two competing eyewitness accounts: one (wrongly, as it turned out) identifying the attacker as “angry” and “Middle Eastern,” and another correctly identifying him as “white.” 

Fatah’s tweets are by no means definitive, but they do represent a natural experiment of sorts. And the results show just how fast misinformation can travel. As the graphic below illustrates, the initial tweet—which wrongly identified the attacker as Middle Eastern—received far more engagement than the accurate one in the roughly five hours after the attack:  

Worse, the accurate tweet did not fare much better over a longer time horizon, up to 24 hours after the attack:

(Data and code for the graphics above are available here.)

Taken together, Fatah’s tweets suggest that misinformation on social media genuinely is a problem. As such, they raise two questions: First, why did the incorrect tweet spread so much faster than the correct one? And second, what can be done to prevent the similar spread of misinformation in the future?

The Speed of Misinformation on Twitter

For most of Twitter’s history, its newsfeed was straightforward: The app showed tweets in reverse chronological order. That changed in 2015 with the introduction of an algorithmic newsfeed, which displayed tweets based on a calculation of “relevance” rather than recency.

Last year, the company’s engineering team revealed how its current algorithm works. As with Facebook and YouTube, Twitter now relies on a deep learning algorithm that has learned to prioritize content with greater prior engagement. By combing through Twitter’s data, the algorithm has taught itself that Twitter users are more likely to stick around if they see content that has already gotten a lot of retweets and mentions, compared with content that has fewer. 
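To make that idea concrete, here is a deliberately simplified sketch of what engagement-based “relevance” ranking looks like in principle. It is not Twitter’s actual model: the feature names, weights, and recency penalty below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    retweets: int
    likes: int
    replies: int
    age_hours: float

def relevance_score(t: Tweet) -> float:
    """Toy relevance score: prior engagement dominates, with a mild
    penalty for older tweets. Weights are illustrative, not Twitter's."""
    engagement = 2.0 * t.retweets + 1.0 * t.likes + 1.5 * t.replies
    recency_penalty = 1.0 / (1.0 + t.age_hours)
    return engagement * recency_penalty

def rank_timeline(tweets: list[Tweet]) -> list[Tweet]:
    """Order tweets by descending relevance instead of recency."""
    return sorted(tweets, key=relevance_score, reverse=True)

# A tweet with heavy early engagement outranks a newer, less-engaged one,
# regardless of which is accurate.
timeline = [
    Tweet("Witness: attacker seemed Middle Eastern", 800, 400, 200, age_hours=3.0),
    Tweet("Police: suspect is a white male", 40, 25, 10, age_hours=1.0),
]
for t in rank_timeline(timeline):
    print(round(relevance_score(t), 1), t.text)
```

The point of the sketch is simply that once engagement drives ranking, whichever tweet gets attention first tends to keep getting it.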

The flow of misinformation on Twitter is thus a function of both human and technical factors. Human biases play an important role: Since we’re more likely to react to content that taps into our existing grievances and beliefs, inflammatory tweets will generate quick engagement. It’s only after that engagement happens that the technical side kicks in: If a tweet is retweeted, favorited, or replied to by enough of its first viewers, the newsfeed algorithm will show it to more users, at which point it will tap into the biases of those users too—prompting even more engagement, and so on. At its worst, this cycle can turn social media into a kind of confirmation bias machine, one perfectly tailored for the spread of misinformation.
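That feedback loop can be illustrated with a toy simulation, again using assumed parameters rather than real platform data: each round, the algorithm exposes a tweet to an audience proportional to its cumulative engagement, and a fixed share of those viewers engage. A small difference in how provocative a tweet is compounds quickly.

```python
def simulate_spread(initial_engagement: int,
                    engagement_rate: float,
                    exposure_multiplier: float = 5.0,
                    rounds: int = 10) -> list[int]:
    """Toy cascade: each round the algorithm shows the tweet to an
    audience proportional to its total engagement so far, and a fixed
    fraction of those viewers engage. All parameters are illustrative."""
    total = initial_engagement
    history = [total]
    for _ in range(rounds):
        viewers = exposure_multiplier * total      # algorithmic amplification
        total += int(engagement_rate * viewers)    # human response
        history.append(total)
    return history

# An inflammatory tweet with a higher engagement rate pulls away fast,
# even if both tweets start from the same initial audience.
inaccurate = simulate_spread(initial_engagement=20, engagement_rate=0.10)
accurate = simulate_spread(initial_engagement=20, engagement_rate=0.02)
print("inaccurate:", inaccurate)
print("accurate:  ", accurate)
```

In this sketch the more provocative tweet grows roughly exponentially while the other barely moves, which is the same qualitative pattern visible in the engagement curves for Fatah’s two tweets.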

If you look at Fatah’s tweets, the process above plays out almost to a tee. A small subset of Fatah’s followers immediately engaged with the tweet reporting a bystander’s account of the attacker as “angry” and “Middle Eastern,” which set off a cycle in which greater engagement begat greater viewership and vice versa. By contrast, the tweet that accurately identified the attacker received little initial engagement, was flagged less by the newsfeed algorithm, and thus never really caught on. The result is the graph above, which shows an exponential increase in engagement for the inaccurate tweet, but only a modest increase for the accurate one.

What To Do About It

Just as the problem has both a human and technical side, so too does any potential solution. 

Where Twitter’s algorithms are concerned, there is no shortage of low-hanging fruit. During an attack itself, Twitter could promote police or government accounts so that accurate information is disseminated as quickly as possible. Alternatively, it could display a warning at the top of its search and trending feeds about the unreliability of initial eyewitness accounts.
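As a rough illustration of the first suggestion, a crisis-mode ranking tweak could simply boost the relevance of vetted authority accounts. The boost factor and the “authority account” flag below are assumptions made for the sake of the sketch, not settings Twitter actually exposes.

```python
CRISIS_AUTHORITY_BOOST = 10.0  # illustrative multiplier, not a real Twitter setting

def crisis_score(base_relevance: float,
                 is_authority_account: bool,
                 crisis_mode: bool) -> float:
    """During a declared crisis, multiply the relevance of tweets from
    vetted authority accounts (e.g., police or city government) so that
    confirmed information can outrank early, unverified eyewitness posts."""
    if crisis_mode and is_authority_account:
        return base_relevance * CRISIS_AUTHORITY_BOOST
    return base_relevance
```

The design choice here is to reweight rather than censor: eyewitness accounts still appear, but official updates rise to the top while events are unfolding.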

Beyond that, Twitter could update its “While You Were Away” and search features. In the case of the Toronto attack, Twitter could not have been expected to identify the truth faster than the Toronto police. But once the police had identified the attacker, Twitter should have had systems in place to restrict the visibility of Fatah’s tweet and other trending misinformation. Yet more than ten days after the attack, the top two results for a search of the attacker were these:

(I conducted the above search while logged into my own Twitter account, but a search while logged out produced the same results.)

Unfortunately, these were not isolated tweets. Anyone using Twitter to follow and learn about the attack has been greeted with a wealth of misinformation and invective. This is something Twitter can combat: Either it can hire an editorial team to track and remove blatant misinformation from trending searches, or it can introduce a new reporting feature for users to flag misinformation as they come across it. Neither option is perfect, and the latter would not be trivial to implement. But the status quo is worse. How many Twitter users continue to think the Toronto attack was the work of Middle Eastern jihadists, and that Prime Minister Justin Trudeau’s immigration policies are to blame?
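A user-flagging feature could feed into ranking in a similarly simple way, for instance by demoting tweets once enough viewers report them. The thresholds and the demotion factor below are, again, illustrative assumptions rather than a description of any existing Twitter feature.

```python
def demotion_factor(flags: int, impressions: int,
                    flag_rate_threshold: float = 0.01,
                    min_flags: int = 25) -> float:
    """Return a multiplier applied to a tweet's relevance score.
    If enough viewers flag the tweet as misinformation, its visibility
    is sharply reduced pending editorial review. Thresholds are toy values."""
    if impressions == 0:
        return 1.0
    flag_rate = flags / impressions
    if flags >= min_flags and flag_rate >= flag_rate_threshold:
        return 0.1  # demote rather than silently delete
    return 1.0

# Example: 60 flags out of 4,000 impressions (1.5%) triggers demotion.
print(demotion_factor(flags=60, impressions=4000))  # 0.1
```

Any such rule would need safeguards against coordinated false flagging, which is one reason an editorial review step matters.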

Ultimately, however, the solution to misinformation will also need to involve the users themselves. Not only do Twitter’s users need to better understand their own biases, but journalists in particular need to better understand how their mistakes can be exploited. In this case, the biggest errors were human ones: Fatah tweeted out an account without corroborating it, even though the eyewitness in question, a man named David Leonard, himself noted that “I can’t confirm or deny whether my observation is correct.” 

To counter misinformation online, we can and should demand that newsfeed algorithms not amplify our worst instincts. But we can’t expect them to save us from ourselves.


Chris Meserole researches emerging technology, international security, and violent extremism. He is a fellow in the Center for Middle East Policy at the Brookings Institution.
