
A Battle for Better Information

Kate Starbird
Monday, September 18, 2023, 10:34 AM
Researchers working to mitigate online election lies are facing multifaceted attacks, but with 2024 looming, that critical work continues.
"Man Reading." (LinkedInSalesNavigator, https://tinyurl.com/4z9pckrn; CC0 1.0, https://creativecommons.org/publicdomain/zero/1.0/)

Published by The Lawfare Institute in Cooperation With Brookings

For more than a decade, I have watched and studied how innocent people—the parents of shooting victims, humanitarian responders, public health workers, and election officials—come under attack online from conspiracy theorists and disinformation campaigns that confuse facts, mislead audiences, and destroy reputations.

This year I crossed over from researcher to subject. I became the focus of false rumors, conspiracy theories, personal harassment, congressional investigations, and even physical threats. Many colleagues, both at my home institution, the University of Washington, and at other schools around the country, are experiencing the same. Collectively, we worry about the security and well-being of our students and colleagues, as well as the long-term impacts on our field and on academic freedom more broadly. Already, we are coping with a chilling effect. We have seen prominent voices in public debates go silent, encumbered by legal stress and institutional pressure, during a time when we arguably need them the most.

As we navigate this multifaceted and sustained assault on disinformation researchers, it’s important to take stock of the implications for our country and, most immediately, the upcoming 2024 elections. 

In recent years, scholars have converged around the study of online falsehoods, including accidental misinformation and intentional disinformation, forming a new academic field. This multidisciplinary field brings together researchers from computer and information science, sociology, psychology, cybersecurity, and more. It relies on scientific methodologies and produces peer-reviewed scholarship. In nonacademic terms, we study the spread of misleading information and its impacts on individuals, public discourse, and society.

We don’t make policy. Despite accusations to the contrary, we don’t act as censors of content, individuals, or news media. We conduct scientific research to better understand how we, as humans, are vulnerable online to deception and manipulation—and how we, as a society, might go about mitigating these vulnerabilities. In the spirit of public scholarship, many of us share our findings beyond the confines of academia to help inform policymakers, educators, journalists, social media platforms, election officials, and the broader public.

It is impossible to understand the disinformation phenomenon without considering the role of social media, which mediate the flow of information and are actively exploited to manipulate users. Social media companies face ongoing challenges of mitigating harmful misinformation and deception, and open questions about how to ensure healthy discourse while protecting commitments to free speech. They also house the digital trace data—for example, capturing content sharing and user engagements—that, when made available for independent research, can be analyzed to identify rumors and map disinformation campaigns. To support transparency and facilitate knowledge exchange, social media companies and disinformation researchers have, in the past, developed open lines of communication with each other. 
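
To make the idea of analyzing digital trace data a bit more concrete, here is a purely illustrative sketch, not a description of any actual research pipeline. It assumes hypothetical reshare records (the field names, sample data, and thresholds are all invented) and flags posts whose cascades spread unusually fast, the kind of signal that would then go to human analysts for review.

```python
# Illustrative sketch: flagging fast-spreading posts from hypothetical engagement trace data.
# Field names and thresholds are invented; real pipelines add veracity assessment,
# network mapping, and manual review.
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical trace records: each reshare points back to an original post.
traces = [
    {"post_id": "p1", "reshared_from": "orig_A", "time": "2024-11-05T08:00:00"},
    {"post_id": "p2", "reshared_from": "orig_A", "time": "2024-11-05T08:03:00"},
    {"post_id": "p3", "reshared_from": "orig_A", "time": "2024-11-05T08:05:00"},
    {"post_id": "p4", "reshared_from": "orig_B", "time": "2024-11-05T08:00:00"},
]

def flag_bursts(records, window_minutes=10, min_reshares=3):
    """Group reshares by the post they amplify and flag cascades whose
    reshare count within a short window exceeds a simple threshold."""
    cascades = defaultdict(list)
    for r in records:
        cascades[r["reshared_from"]].append(datetime.fromisoformat(r["time"]))

    flagged = []
    window = timedelta(minutes=window_minutes)
    for origin, times in cascades.items():
        times.sort()
        # Count reshares that fall within the window starting at the first reshare.
        burst = sum(1 for t in times if t - times[0] <= window)
        if burst >= min_reshares:
            flagged.append((origin, burst))
    return flagged

print(flag_bursts(traces))  # [('orig_A', 3)] -- a candidate rumor for human review
```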

The challenges of online mis- and disinformation are especially salient during elections. Falsehoods about election processes often start as sincere misinterpretations and are later amplified for partisan political gain. They can also be seeded or spread by foreign disinformation campaigns, as observed in 2016. In some cases, misleading information can result in voter disenfranchisement, for example, by confusing people about when or where to vote. In others, it can have the impact, intended or not, of undermining trust in election results. On Jan. 6, 2021, for example, Americans learned how false claims about elections can threaten the foundations of democracy.

It may seem unintuitive, but the events of Jan. 6—and the role of false claims of voter fraud in motivating and justifying those events and other efforts to overturn the 2020 election—are deeply connected to the modern attacks on disinformation researchers like myself. 

In 2020, my team and I participated in a collaborative project that sought to help mitigate harmful misinformation about the U.S. election. At the University of Washington, we applied our social media research methods to identify, analyze, and communicate about false, misleading, and unsubstantiated claims about election processes and procedures. We primarily produced public-facing tweet threads and articles alerting audiences to specific falsehoods and describing patterns in the online spread of election misinformation. But the project was far broader than our group, incorporating numerous collaborators and external partners.  

In one aspect of the larger team’s work, researchers connected with local and state election officials—the under-resourced workers tasked with facilitating elections from start to finish—who could alert us to rumors they were seeing, help us assess rumor veracity, and provide accurate information about how their elections work.

In another element of the project, researchers shared insights with social media platforms about content that seemed to violate their “civic integrity” policies, such as information that misled people about voting times or locations and misleading content that sowed doubt in election results. This work was intentionally bipartisan, focused exclusively on claims about election processes and procedures, and intended to defend the integrity of the election by ensuring that people had accurate information about when and where to vote and about the integrity of the voting process.

Now, this project has become the focus of online conspiracy theories, lawsuits, and partisan congressional investigations that grossly mischaracterize our work, framing a bipartisan effort to help mitigate falsehoods about election processes as “government censorship of conservatives.” Following a common trajectory in the internet era, these smears began with cherry-picked “evidence” and willful misinterpretations by online conspiracy theorists, spread via opportunistic amplification by online personalities, and were eventually laundered into the public record through congressional testimony. 

Beyond the political spectacle and Substack subscription sales, these efforts seek to discredit us and our now-peer-reviewed research documenting the lies that helped motivate and justify the events of Jan. 6. They may also hamstring future efforts to support local and state election officials in countering election falsehoods. And they are already having a chilling effect on the field of online disinformation more broadly.

We have our work cut out for us in the next election cycle. The 2024 U.S. election looms like a gathering storm—a large portion of the population has lost trust in the process, and there is political motive and dedicated infrastructure in place to quickly amplify election-related rumors.

False rumors will likely flourish, and disinformation campaigns (orchestrated by various actors) will take off during the primaries. Disgruntled party members will likely express doubt about the results if their candidate loses, and members of other parties—as well as foreign disinformation agents—will be incentivized to amplify those doubts.

Meanwhile, social media platforms have stepped back from both transparency and moderation. In the wake of the Jan. 6 attack on the U.S. Capitol, social media platforms were criticized for failing to enforce the civic integrity policies that may have helped stem the rhetoric that motivated the violence. Yet instead of strengthening enforcement as we head into the 2024 election, platforms have walked those policies back.

Quite problematically, the platform formerly known as Twitter (now X), once a vital resource for real-time news, has abandoned many of the policies and design innovations that protected users from harassment, deception, and manipulation. For example, in April of this year, Twitter removed its “state-affiliated media” labels that allowed users to identify content coming from government-controlled media. Simultaneously, companies including X have powered down their free application programming interfaces, denying researchers and journalists access to data sources that we relied on for years to rapidly identify, analyze, and distinguish between organic rumors and disinformation campaigns.

So, what do we do? 

Our top priority must be ensuring that U.S. voters accurately understand when, where, and how to vote. Right now, election officials on the front lines are severely outmatched by the forces arrayed against them. And it is not clear whom they can turn to for help. They need more federal funding to stage resources and build out communication strategies and teams. They need training and education about how to identify rumors as well as when and how to respond. And they need clear directives about whom they can turn to for support. Can they receive trainings about disinformation tactics? Can they work with independent researchers to increase their capacity to identify rumors? Can they share accurate information with social media platforms to refute a viral rumor? Should they be turning to the federal government for operational help, or not? This may sound straightforward, but recent court rulings and congressional investigations have put much of it in doubt. Fortunately, the Sept. 8 decision by the U.S. Court of Appeals for the Fifth Circuit rescinded some of the worst elements of the earlier injunction that placed overbroad limitations on communication around these needs. But there are still open questions that need to be answered. Quickly.

Second, policymakers need to address the issue of social media transparency. Twitter data was useful in its own right—and its accessibility put pressure on other platforms to be more transparent, giving observers a glimpse into the inner workings of social media giants. Those days are gone. As described above, Twitter, now X, has changed its data access policies, essentially pricing out academic researchers and journalists. Other platforms, including Reddit, have followed suit. This is a troubling trend. Without visibility into the dynamics within these platforms, independent researchers have no way to track the spread of rumors and disinformation campaigns—or to evaluate the impact of the platforms’ recommendation systems and moderation decisions. Events like the tragic Maui fires have underscored what has been lost. The quality of real-time information on X appears significantly lower than it was on Twitter a year ago, but researchers can no longer measure that decline with precision. If we are to have a chance to combat online manipulation and hold platforms accountable for their negative impacts, we need policies that push platforms back onto a path toward transparency. This can, and should, be done in ways that protect the privacy of individual users, while providing insights into macro-level patterns and the role of public figures in shaping information—and disinformation—flows.

***

My final comment here is a personal one. This has been a trying year for my team and for many other researchers working in this space. We lost time and wasted resources defending ourselves against various disingenuous attacks. While we struggled to regain our bearings—to understand what was happening and figure out how best to respond—we stepped back from public scholarship. For a time, our social media accounts grew quiet and we declined media requests.

But I want to be clear: Our work goes on. At the University of Washington’s Center for an Informed Public, we have dozens of students, postdoctoral scholars, research scientists, and faculty who remain deeply committed to pushing this research forward—to advance scientific understandings around how rumors, misinformation, and disinformation flow across our modern information landscape and to inform potential solutions for mitigating the harms of misinformation and manipulation at scale. Our “rapid research” program, which supports real-time analysis and public communication about online rumors, remains active. We are currently working with collaborators (old and new) to prepare for the 2024 election.

From speaking with colleagues around the country, I can say with confidence that we are not alone in this fight. We will not be deterred by efforts to silence us.


Kate Starbird is co-founder of the University of Washington Center for an Informed Public.
