
A Tale of Two Insurrections: Lessons for Disinformation Research From the Jan. 6 and 8 Attacks

Dean Jackson, João Guilherme Bastos dos Santos
Monday, February 27, 2023, 6:11 PM

Trespassers on the ramp of the Brazilian Congressional Palace on Jan. 8, 2023. (Marcelo Camargo, https://tinyurl.com/bdzrnjcc; CC Attribution 4.0 International, https://creativecommons.org/licenses/by/4.0/deed.en)


This year’s Jan. 8 attack on Brazilian democracy drew quick comparisons to the storming of the U.S. Capitol almost exactly two years prior. Both riots sought to overturn the results of the preceding presidential election, in the U.S. by preventing its certification and in Brazil by provoking intervention from the country’s armed forces. Both were motivated by fictitious claims of election fraud. Both took place following weeks of rallies, with buses to the capital city seemingly paid for by supporters of the losing presidential candidate. And both were organized in large part over social media and private messaging apps.

As the public learns more about the Jan. 6 committee’s investigation into the role played by social media in the U.S. insurrection, the clear parallels between the two events have shed light on important lessons for efforts to respond to the twin challenges of digital disinformation and growing anti-democratic extremism. 

The first lesson is that social media has become central to the modern extremist landscape, often supplanting affiliation with formal organizations. Extremists can mobilize far more effectively on digital platforms than they can through formal organizations alone. While the Jan. 6 committee’s final report spotlighted the role of militias and extremist groups like the Oath Keepers and Proud Boys, members of these groups represented a small minority of rioters at the Capitol. The presence of so many unaffiliated rioters in Washington suggests something that was also true for Brasilia: The spread of election disinformation and extremist rhetoric was a more effective motivator than membership in established groups with public leaders and logos. In Brazil, election deniers set up camps in front of army bases across the country in the weeks leading up to Jan. 8; these eventually developed into what Brazil’s justice minister called “incubators of terrorism.” These camps were physical echo chambers: Like-minded individuals shared feverish, fantastical claims about current conspiracies, future plans, and even fictional events of the past, including stories of military intervention to prevent President Luiz Inacio Lula da Silva’s swearing-in ceremony.

Alex Newhouse, a violent extremism researcher who worked with the Jan. 6 investigation, made a similar point in a piece titled “The Threat Is the Network,” writing that “enforcement against individuals and groups is necessary but not sufficient for mitigating the threat.” Meta reached similar conclusions in late 2020, when the company adapted to dangerous, unstructured phenomena like QAnon with a policy against “militarized social movements and violence-inducing conspiracy networks.” According to the Jan. 6 committee’s investigation, in the weeks before the attack, Facebook staff asked the company to use this policy to head off the rapidly growing “Stop the Steal” movement, but executives demurred. A leaked postmortem on Jan. 6 and Stop the Steal by an internal task force highlighted the company’s challenges in responding to “coordinated harm.” Later that year, Meta released a policy detailing its efforts to disrupt networks conducting coordinated social harm campaigns on the platform.

A second lesson is that election integrity efforts cannot stop on voting day. The period between election day and inauguration day is roughly the same in the United States and Brazil, but unlike the United States, Brazil has a nationwide electronic voting system that allows it to tally the vote in hours. This much smaller gap between voting and the official result helped prevent a movement equivalent to Stop the Steal from gaining momentum in Brazil before Lula’s inauguration (though false claims of fraud still circulated). This suggests the United States might benefit from changes to its decentralized voting system that shorten the gap between voting and official results.

But electoral reforms will not be sufficient to safeguard against future attacks: While the Jan. 8 attack came after Brazil’s presidential inauguration, it came nonetheless. Both countries would benefit from a longer period of heightened caution by social media companies following election day (if not a permanent one). Consider that, in the United States, Facebook rolled back several temporary emergency “break glass” measures in December 2020; in Brazil, Twitter laid off almost its entire staff in November 2022. Both decisions came after the votes were tallied and weeks before the respective attacks.

The third lesson is that humans are at least as important as algorithms in spreading disinformation and mobilizing attackers. While much of the debate over social media’s political impact revolves around the role of recommendation algorithms in radicalizing users, the conversation should not stop there. The riots in Washington, D.C., and Brasilia were the result of human organizers taking advantage of digital platform features in a variety of ways. Many relevant platforms, like Telegram and WhatsApp, have no distribution algorithms inside group chats.

Individuals used social media to disseminate the narrative justification for the Stop the Steal movement and to coordinate the rally that eventually became the Capitol riot. They were able to do so because Facebook Groups is a powerful organizing tool, but not because of algorithmic amplification—most users joined the groups through invites from a small handful of super-inviters. In Brazil, social media companies again missed the warning signs of mobilization for a violent attack. Far-right influencers and online activists trumpeted false stories about election fraud, made thinly veiled (if veiled at all) calls for violence, shared details on how to reach Brasilia to participate in the attack, and broadcast their actions live. The failure of members of the security forces to act on the day of the attack also played a key role.

Providing this kind of megaphone and mobilizing power is a more immediate contribution to the attacks than gradual radicalization over time, and one that companies have a responsibility to prevent regardless of any causal relationship between social media and political violence.

The fourth lesson is that platforms must develop more proactive and robust content moderation responses to networked threats like these. Both attacks were preceded by clear warning signs, such as a swell of concerning activity that did not necessarily cross the line into violations of platform rules. Before Jan. 6, Facebook debated but did not impose restrictions on false claims of election fraud, creating policy holes that prevented a holistic response to Stop the Steal. The Jan. 6 committee’s investigation found that YouTube restricts the algorithmic recommendation of content close to the “borderline” of a policy violation; but in Brazil, research suggests that videos from far-right influencers spreading election fraud conspiracy theories were among the most important in the network of related videos suggested by YouTube to people searching for terms like “votes” and “Superior Electoral Court.” Were those videos assessed by content moderators as borderline? If not, perhaps the border should be moved; if so, why were they so often recommended to users?

Contrast Facebook’s and YouTube’s policies with the hate speech policy that Reddit used to shut down infamous pro-Trump subreddit r/The_Donald: According to the draft memo on social media produced by the Jan. 6 committee, the policy “allowed Reddit to look more broadly at community dynamics when determining whether to take action against a subreddit.” A parallel is the draft policy against implicit incitement of violence that Twitter wrote but did not implement in the weeks before Jan. 6: Both rely on an understanding of on- and off-platform context to make decisions about what actors, behaviors, and content are allowed. Content moderation needs to allow for more flexible, proactive, context-aware action by companies—which will open them to political backlash they work hard to avoid, making transparent creation and enforcement of rules even more important.

At the same time, content moderation within individual companies is insufficient, especially with a growing number of platforms with different rules, features, and even ideologies. This is the unfortunate fifth lesson: The threat of disinformation-inspired violence is a cross-platform phenomenon that will become only more difficult to contain as the number of platforms grows. YouTube played a role in the Jan. 8 attack, but it did so in conjunction with other platforms. If YouTube removes a video, downloaded copies are often shared over WhatsApp, allowing the video’s content to circulate even after it has been taken down. If YouTube merely downranks a video in its recommendation and search algorithms, the link to that video can similarly travel far on WhatsApp, Twitter, Facebook, and other services.

Other video platforms, like TikTok, face similar challenges. A particularly interesting one is Kwai—a newer platform with almost no content moderation safeguards. In fact, two days after the Jan. 8 attack, former President Jair Bolsonaro himself posted a Kwai video on his Facebook page suggesting that President Lula was not elected but chosen by the Superior Electoral Court and the Supreme Court. He deleted the video from Facebook shortly after—but gave a hint of where to find more.

This is a problem beyond individual platform content moderators. As Mark Scott wrote for Politico about the attack in Brasilia:

Telegram channels promoted YouTube videos. Facebook posts directed people to Gettr content. Everything is cross-platform, making it next to impossible to stop these messages from spreading widely …. This cross-platform strategy shows that such organizers of offline violence know how to exploit the differences within social media companies’ terms of service and/or content moderation policies to maximum effect. The actual planning of the Brazilian riots … took place in encrypted online spaces that have almost no policing. The promotion of those actions then shifted to more mainstream networks that were exploited to reach the largest online audience possible.

These words just as easily could have been written about Jan. 6. Threat investigators already look for warning signs across many platforms to understand how they are used in conjunction to cause offline harm. Social media research must do the same. But the landscape shifts far faster than regulators or most researchers can adapt. Analysis of any single platform, or interventions designed to prevent the spread of disinformation on it, will have only a limited impact on the ecosystem as a whole.

A sixth and final lesson is that, though the threat of future insurrections persists, addressing social media’s role will entail real trade-offs. The attacks of Jan. 6 and 8 provide a model for future digitally mobilized coup attempts by anti-democratic activists. But many of the most effective things social media companies could do to disrupt the threat would also diminish the platforms’ value to activists mobilizing for more noble causes by making it more difficult for users to quickly connect with others, build movements, and share information. 

For example, many of the “break glass” measures Facebook deployed in anticipation of threats to the 2020 U.S. election specifically made it more difficult to join groups and slowed the spread of content. Some of them generated high rates of false positives—permissible speech that was downranked anyway after machine learning guessed wrongly that it violated policy. At the time, this was seen as the price of civic safety, but many of these measures were famously rolled back after the election. Today, they raise real questions about which values should guide the platforms that play host to free expression online: Should the ability to reach millions be given, or earned? When it comes to detecting harmful content, is it worse to mistakenly remove speech or to wrongfully allow it?

Similar questions revolve around the culpability of the influencers and activists who fanned the flames of election denial and spurred extremists to action. In Brazil, this is exemplified by the debate between Intercept founder Glenn Greenwald (who lives in Brazil) and Brazilian journalist Celso Rocha de Barros over the actions of Alexandre de Moraes—the Supreme Court justice who led inquiries into digital militias and “fake news” and who, as head of the Superior Electoral Court, took an aggressive stance against disinformation and later ordered the arrest of the security officials who failed to secure the capital on Jan. 8. Greenwald considered measures against online profiles that cast doubt on the election to be excessive; Rocha de Barros, by contrast, supported those measures, given the seriousness of the calls to action that resulted in violence. On Jan. 25, Moraes fined the messaging app Telegram 1.2 million Brazilian reals (about $235,000) for failing to comply with a court order to suspend the account of far-right influencer and congressman-elect Nikolas Ferreira.

Consensus answers to the above dilemmas could form the basis of shared norms for governing free speech in the machine learning age. Settling these debates could take content moderation out of its defensive crouch and enable more proactive efforts to detect and prevent the next attack on democracy from being mobilized online. But societies should not expect corporations to answer these questions for them—only democratic deliberation can do that. 


Dean Jackson studies democracy, media, and technology. As an analyst for the January 6th Committee, he examined social media’s role in the insurrection. Previously, he managed the Influence Operations Researchers’ Guild at the Carnegie Endowment for International Peace and oversaw research on disinformation at the National Endowment for Democracy. He holds an MA in International Relations from the University of Chicago and a BA in Political Science from Wright State University.
João Guilherme Bastos dos Santos is a researcher at the Brazilian National Institute of Science and Technology for Digital Democracy (INCT.DD), a member of the Carnegie Endowment’s Partnership for Countering Influence Operations (PCIO), and a data analyst at Internews. He holds a doctorate in Communication from Rio de Janeiro State University, which included a doctoral visit supervised by Stephen Coleman at the University of Leeds. His current work focuses on applied research and coding to address rumors, misinformation, influence operations, and disinformation campaigns.
