
A Brief History of Online Influence Operations

Jacob T. Rob, Jacob N. Shapiro
Thursday, October 28, 2021, 10:52 AM

What does the history of online influence operations reveal about how to tackle disinformation?



The Wall Street Journal’s Facebook Files series resumed last week, revealing that the platform took action against an online campaign to set up a new right-wing “Patriot Party” after the Jan. 6 insurrection. Earlier this month, news outlets reported that a number of former employees excoriated the company’s content moderation practices in their departure emails. And on Oct. 25, a dozen news outlets released new stories based on yet more leaked Facebook documents. In congressional hearings on the initial Facebook leak, Sen. Richard Blumenthal succinctly captured the tone of public sentiment, saying that “Facebook and Big Tech are facing a Big Tobacco moment.”

Salacious as these revelations may be, they raise a deeper question: How can it be that society depends on whistleblowers revealing internal studies that could not pass peer review for insight into the societal harms exacerbated by multibillion-dollar companies that hundreds of millions of Americans (and billions of people around the world) use for hours every week? 

It’s not like the stakes are low. As America’s deeply challenged vaccination effort so strikingly suggests, misleading facts, conspiracy theories and political disinformation circulating online could pose a clear and present danger to democratic society. But beyond observing the coincidence of a poor public health response and widespread misinformation, there is very little high-reliability research on the impact of online influence campaigns and disinformation.

So how did society get here? The arc of online influence efforts—or at least the policy discourse around them—can be traced back to 2004, when videos posted by Iraqi insurgents focused international attention on online terrorist propaganda. Beginning in 2011, Russia began experimenting with the use of social media for covert influence campaigns abroad, shifting attention in a new direction. By early 2017, the extent of Russian involvement in American social media was clear, and online disinformation campaigns finally began to attract sustained attention from a wide range of organizations. 

Facebook and Twitter ramped up their efforts to find and disclose disinformation. The European Commission developed a Code of Practice on Disinformation with several of the major platforms. And a range of organizations—including private firms, academic research groups and nongovernmental organizations—began publishing case studies of online disinformation efforts. And then in 2020, the coronavirus pandemic hit and countries around the world were deluged with content that could clearly be labeled disinformation. This influx of disinformation on a common topic posed an obvious threat to public health, drove the growth of fact-checking organizations in dozens of countries and triggered a new round of attention to the issue.

This history holds many lessons for what should be done now, so it is worth examining in greater detail. 

Terrorist Organizations Move Online, 2004-2011

The threat of online influence efforts first made mainstream news in 2004 as terrorist organizations in Iraq and elsewhere began using the internet to recruit, spread ideas and share knowledge. Alarmist headlines, such as “Militants weave web of terror” from the BBC or “The Terror Web” from the New Yorker, became commonplace. The new genre of jihadi beheading videos evoked particular horror. Terrorist organization websites multiplied rapidly in this period, increasing from fewer than 100 in 2000 to more than 4,800 by 2007. This coincided with the rise of social media, as the Taliban, al-Shabab and al-Qaeda developed their presence on Facebook and Twitter.

Terrorist groups’ use of the internet was particularly worrisome to those who thought it could be an effective tool for radicalizing people without ever making physical contact. This fear was only partially justified. The internet did enable terrorist groups to cast a wider net, familiarizing individuals across the globe with their views. Furthermore, it allowed terrorists to provide explicit instructions on how to build bombs, fire surface-to-air missiles and attack U.S. soldiers with firearms. 

However, this was the extent of the terrorist groups’ abilities. They proved ineffective at generating sustained terrorist activity through online propaganda. Careful studies of radicalized individuals found that terrorist content increased the range of opportunities for radicalization and recruitment, enabled groups to stay in touch with existing supporters, and motivated the occasional lone-wolf attack. But it did not replace the need for physical contact in the recruiting process, as the ability to catalyze a successful attack through the internet alone remained limited.

Russia Begins Experimenting, 2012-2014

The weaponization of the internet soon expanded beyond terrorist groups to nation-states. From 2012 to 2014, online influence efforts, or at least the policy discourse around them, shifted away from terrorist groups’ direct attempts to foster support and toward nation-states, primarily Russia, using the internet to spread disinformation and sway public opinion.

As the civil war in Syria escalated in 2012, participants on all sides routinely posted fake videos to bolster claims about the other side’s perfidy. Pro-government activists were prominent on social media and coordinated hacking with Twitter activism to discredit organizations reporting on government human rights abuses. Russia began experimenting with new ways to help its beleaguered ally, with hackers allegedly hijacking a Reuters Twitter feed in an attempt to create the impression of a rebel collapse in Aleppo.

Russia’s efforts in Syria proved only the beginning. As the Euromaidan crisis intensified, Russia ramped up its online effort to pacify the citizens of Ukraine and distract the international community. Russian news media repurposed old pictures from Syria, Chechnya and Bosnia as evidence of murderous actions by Ukrainian fascists. On top of this, Russia sought to silence independent news sources and used internet blacklists, an anti-piracy law and security systems to shape the information available within Russia.

The Russian disinformation campaign took on a new intensity after Russian forces operating out of separatist regions of Ukraine shot down Malaysia Airlines Flight 17, killing 298 people. Russia began using “troll factories”—institutionalized groups that push disinformation via social media, sometimes using automated accounts. It used these troll factories to discredit those linking Russia with the missile attack. Groups such as CyberBerkut and Russkaya Vesna were particularly effective at monopolizing online discourse: On the majority of days between February 2014 and December 2015, more than 50 percent of tweets concerning Russian politics were produced by Russian bots.

Russia’s efforts to use online disinformation to sway public opinion were particularly evident during the Ukrainian Revolution and subsequent civil war, as Russia combined military aggression with traditional propaganda and social media campaigns to shape perceptions of the conflict and weaken public resolve within Ukraine.

The Islamic State and Russian Disinformation Increase, 2014-2016

A new and positive development between 2014 and 2016 was that the EU began to implement measures to combat Russia’s disinformation efforts. The EU set up a strategic communication task force that published a weekly review exposing Russian disinformation and sought to promote “European values” and increase awareness to combat the effects of disinformation. And the Poynter Institute established the International Fact-Checking Network to support the community exposing disinformation, online and otherwise, around the world.

While positive, the EU’s effort did not address the surge of terrorist propaganda as the Islamic State spread across Iraq and Syria, pushing an innovative multimedia campaign filled with horrific murders and first-person video intended to appeal to a generation steeped in first-person shooter games. Between September and December 2014, the Islamic State’s Twitter presence comprised more than 46,000 accounts with an average of 1,000 followers each, driven by a core of 500 to 2,000 hyperactive users. This effort was more effective at targeting the millennial generation than previous attempts in the early 2000s because the Islamic State’s strategy specifically targeted lonely and isolated Americans and other Westerners in search of a network, community, and a sense of purpose and belonging.

Notably, the Islamic State’s use of the internet as a propaganda tool went largely unchecked for several years, until the creation of the Global Internet Forum to Counter Terrorism (GIFCT) in 2017. 

Targeting U.S. Politics, 2015-2017

Russia’s direct targeting of the 2016 U.S. election marked a turning point in the use of online information campaigns. Russian operatives used social media to stoke tension on hot-button political issues, support Bernie Sanders during the primary and target Hillary Clinton during the general election. Combined with hack-and-leak operations targeting a senior Clinton aide and the Democratic National Committee, as well as efforts to break into multiple states’ voting systems, the Russian influence operation represented an unprecedented level of direct interference in U.S. politics by a foreign power.

While a few researchers identified the threat before the election, their efforts to galvanize public attention were largely ignored. As the extent of Russian influence efforts became clear in the weeks and months after the election, much of the U.S. news media downplayed the threat of disinformation. One line of argument asserted that it was counterproductive to cover the problem as doing so would erode trust in traditional news media and draw further attention to erroneous claims. Others argued that nation-state disinformation was a lesser problem than the degradation of journalistic standards. As one prominent scholar stated in a 2018 Washington Post op-ed: “[T]he fundamental driver of disinformation in American politics of the past three years has not been Russia, but Fox News and the insular right-wing media ecosystem it anchors.”

After the 2016 election, attention to online disinformation proliferated. Scholars began focusing on the issue, including systematically documenting exposure to fake news. In June 2017, Facebook, Microsoft, Twitter and YouTube established the GIFCT, which created a bank of hashed terrorist content that companies could use to prevent the materials from circulating. In mid-2017, a group of researchers teamed up with the Alliance for Securing Democracy to publish a dashboard tracking Russian activity on Twitter. They uncovered ongoing Russian efforts to push both sides of a range of hot-button political issues, from gun control to police violence to race relations. Although the effort was criticized for a lack of transparency, reluctance to identify specific accounts, and limited coverage, it set a precedent for future data releases and reporting.
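The hash-sharing approach behind the GIFCT database is conceptually simple: each company fingerprints confirmed terrorist content and checks new uploads against the shared set of fingerprints. The sketch below is a minimal illustration of that idea, not the consortium’s actual implementation; real systems rely on perceptual hashing so that re-encoded or slightly edited copies still match, whereas the exact cryptographic hashes used here only catch identical files, and the function names are hypothetical.

```python
import hashlib

# Hypothetical shared bank of fingerprints of confirmed terrorist content.
# A real hash-sharing consortium relies on perceptual hashes so that
# near-duplicates still match; exact SHA-256 digests keep this sketch simple.
shared_hash_bank: set[str] = set()


def register_known_content(content: bytes) -> str:
    """One platform contributes the fingerprint of confirmed violating content."""
    digest = hashlib.sha256(content).hexdigest()
    shared_hash_bank.add(digest)
    return digest


def should_block_upload(upload: bytes) -> bool:
    """Another platform checks a new upload against the shared bank."""
    return hashlib.sha256(upload).hexdigest() in shared_hash_bank


# An identical copy of registered content is caught; unrelated content is not.
register_known_content(b"<bytes of a known propaganda video>")
print(should_block_upload(b"<bytes of a known propaganda video>"))  # True
print(should_block_upload(b"<bytes of an unrelated video>"))        # False
```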

The Mueller Indictments and Platform Transparency Change the Game, 2018-2020

In 2018, online influence efforts finally began to receive the attention they warrant. The watershed moment was the February 2018 indictment of a dozen Russians involved in the Internet Research Agency (IRA) troll factory. While the international community had grown concerned over Russia’s interference in Syria and Ukraine, and while reliable reporting on the IRA dated back to 2015, the detailed revelation that Russia aggressively targeted the U.S. presidential election was uncharted territory. As in Syria and Ukraine, Russia used state-funded media outlets that published slanderous news regarding the Clinton campaign and posted socially divisive content on Facebook, Twitter and YouTube. Content from the Facebook effort alone reached more than 126 million Americans.

Russia’s efforts to influence U.S. politics came as a rude shock to American society. American academics began studying the surge in influence efforts, assessing their relative prevalence on different platforms and carefully documenting how governments used online campaigns. In January 2018, Twitter alerted approximately 1.4 million people that they had interacted with one of the 3,841 IRA-affiliated accounts the company had identified. In May, Facebook announced a new partnership with the Atlantic Council’s Digital Forensic Research Lab, in which Facebook would share information to better document coordinated inauthentic behavior. And in October, Twitter released more than 10 million tweets produced by the IRA accounts and 770 fake accounts associated with Iran. (Since then, Twitter has released several dozen datasets on content removed for being part of inauthentic information operations.)

As awareness grew about the challenges posed by disinformation spread through social media, researchers began to ask why the problem seemed so serious. Some argued the key was the lower cost of sharing content, which enabled disinformation to spread so quickly. Others argued that the algorithms that curate content were to blame, particularly earlier versions that simply amplified the most-viewed content without any filters. These simple algorithms were especially problematic because disinformation tends to be salacious and interesting, helping it gain traction faster than accurate information and creating a snowball effect that enables fake news to dominate social media platforms.
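To see how a purely popularity-driven ranking can snowball, consider the toy simulation below. It is a simplified sketch under invented assumptions, not any platform’s actual ranking system: posts are ordered solely by accumulated views, most attention goes to the top-ranked post, and a slightly more “clickable” false story quickly comes to dominate the feed.

```python
import random

random.seed(0)

# Two toy posts: a salacious false story is slightly more "clickable" than a
# sober accurate one. "appeal" is the probability a viewer engages when shown.
posts = {
    "false_but_salacious": {"appeal": 0.30, "views": 1},
    "true_but_dry":        {"appeal": 0.20, "views": 1},
}


def naive_feed_rank(posts):
    """Rank purely by accumulated views -- no quality or accuracy filter."""
    return sorted(posts, key=lambda name: posts[name]["views"], reverse=True)


# Simulate 10,000 users scrolling a feed in which the top-ranked post gets
# most of the attention. Early engagement feeds back into the ranking.
for _ in range(10_000):
    ranked = naive_feed_rank(posts)
    shown = ranked[0] if random.random() < 0.8 else ranked[-1]
    if random.random() < posts[shown]["appeal"]:
        posts[shown]["views"] += 1

print({name: p["views"] for name, p in posts.items()})
# The more "appealing" false post ends up with the large majority of views.
```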

These perspectives led to arguments that social media platforms ought to take greater steps to slow the spread of disinformation, which in turn pushed the platforms to take concrete, public action. Facebook published four steps it claimed would address the issue: creating a button to report a post as fake news, employing software to identify fake news, reducing financial incentives for the spread of disinformation, and ensuring that reported posts were sent to fact-checking organizations. Twitter implemented algorithmic changes to combat disinformation: reducing the visibility of suspicious accounts in tweet and account metrics, requiring new accounts to confirm an email address or phone number, auditing existing accounts for signs of automated sign-up, and expanding malicious behavior detection systems. Google, by contrast, took few steps to proactively address disinformation circulating on YouTube and was heavily criticized by the European Commission for failing to do more.
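One way to picture the metrics change is as a filter applied before engagement counts are displayed: interactions from accounts flagged as suspicious still exist, but they no longer inflate the visible numbers. The sketch below is a hypothetical illustration of that idea, not Twitter’s actual code; the flagging criteria and threshold are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class Account:
    account_id: str
    confirmed_contact: bool         # verified an email address or phone number
    signup_automation_score: float  # 0.0 (human-like) to 1.0 (bot-like)


def is_suspicious(account: Account, threshold: float = 0.7) -> bool:
    """Hypothetical flag: unconfirmed contact info or a bot-like sign-up pattern."""
    return (not account.confirmed_contact) or account.signup_automation_score >= threshold


def visible_like_count(likers: list[Account]) -> int:
    """Count only engagements from accounts that are not flagged as suspicious."""
    return sum(1 for account in likers if not is_suspicious(account))


likers = [
    Account("organic_user", confirmed_contact=True, signup_automation_score=0.1),
    Account("probable_bot", confirmed_contact=False, signup_automation_score=0.9),
    Account("borderline", confirmed_contact=True, signup_automation_score=0.75),
]
print(len(likers), visible_like_count(likers))  # 3 raw engagements, 1 displayed
```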

The last significant development in this period was the growth of organizations that systematically documented various influence efforts. This new ecosystem included academic research centers such as the CSMap Lab at New York University, Clemson University’s Social Media Listening Center, and Cardiff University’s OSCAR Center, as well as think tanks that combined research with policy advocacy, including the German Marshall Fund’s Alliance for Securing Democracy and the Australian Strategic Policy Institute’s Cyber Policy Centre. Several groups established cooperative relationships with Facebook, most notably the Digital Forensic Research Lab and the Stanford Internet Observatory, enabling them to contextualize the content the company was removing and provide vivid examples of online disinformation. And a few for-profit companies, such as FireEye and Graphika, published regular reports on the topic as part of their business development efforts, providing a valuable public good in addition to supporting corporate goals. Collectively these organizations created an implicit catalog of political disinformation, which proved invaluable as one of us compiled data on online influence operations around the world going back to 2011.

The Coronavirus “Infodemic” Makes Online Disinformation Front Page News Worldwide, 2020

Measures implemented in response to the 2016 surge of disinformation proved insufficient to handle the influx of disinformation that accompanied the coronavirus pandemic. False narratives proliferated around the globe: that the virus originated from bat soup, that it was created as a bioweapon, and that holding your breath for 10 seconds could reveal whether you were infected. False information became so prevalent in early 2020 that the World Health Organization coined the term “infodemic” to describe the scale and speed with which disinformation surrounding the pandemic spread. The phenomenon was exacerbated in the U.S. by President Trump sharing disinformation and eroding trust in federal health advice.

By compromising trust in health officials and discouraging behavior that could contain the virus, the infodemic likely accelerated the spread of the virus. A study conducted in Germany in June 2020 found that 50 percent of respondents had inadequate levels of coronavirus-related health literacy. Worse, although respondents in the study all felt well informed, they reported having trouble judging whether or not they could trust media information surrounding the pandemic.

Perhaps the one positive effect of the pandemic is that it catalyzed further evolution of the global fact-checking community, which now responds rapidly to new threats to the information environment. Media literacy companies such as NewsGuard established new partnerships with software companies to bring their tools to larger audiences. In June, the European Commission asked the major social media companies to begin providing monthly reports on coronavirus-related disinformation. These reports showed that such content continued to spread on all major platforms through at least June 2021.

Global Push for Legislation, 2021 

Today, nation-states and global organizations are moving toward widespread legislation to combat disinformation. The EU has taken noteworthy steps toward effective policy with its voluntary Code of Practice on Disinformation, created in October 2018. The intent is for the code to serve as a co-regulatory instrument fostering cooperation between EU- and national-level authorities and the digital platforms, as outlined in the draft Digital Services Act.

At the company level, Google, which had long been inactive with respect to most kinds of disinformation, began taking concrete steps beyond its long-standing actions against terrorist content. In May, the company introduced new warnings to alert users to data voids: searches on unusual terms and subjects for which there are few credible sources, which tend to get filled with disinformation. YouTube began taking more aggressive action against some kinds of disinformation as well, particularly vaccine-related content, which it banned in late September. Thus far, YouTube has removed the channels of several popular disinformation spreaders.

The first federal step in the U.S. toward curtailing online disinformation came in October 2017, when members of Congress introduced the Honest Ads Act, a bill that would regulate political ads on social media. However, while federal policy has remained limited, 24 state governments have taken steps to improve media literacy. Similar efforts have been pursued in Australia, Canada, Belgium, Denmark, Singapore and Nigeria, among others.

While the spread of anti-disinformation policy represents progress, it has a dark side. Governments of all kinds have used the need to combat the coronavirus infodemic as an opportunity to increase censorship and silence dissenting voices. The Indian government offers a particularly stark example: its “Information Technology Rules, 2021” appear to encourage self-censorship and undermine privacy. Similarly problematic laws have been implemented in Egypt, China, Russia and other countries. The challenge of such laws is summed up nicely by International Press Institute Deputy Director Scott Griffen, who argues that “while combating online disinformation is a legitimate objective in general, handing governments and state-controlled regulators the power to decide what information is true and what is false is a dangerously wrong path.”

Combating disinformation requires a delicate balancing act between censoring fake news and preserving freedom of speech. While the EU’s co-regulation approach has not proved particularly effective thus far, it has encouraged action, preserved free speech and avoided overstepping the bounds of censorship.

Conclusion

When online influence operations burst onto the international scene in 2004, they appeared to be a minor threat. While terrorist groups were able to connect with individuals across the globe, their ability to wield influence proved insignificant. But as nation-states adopted online influence efforts and news consumption moved online, disinformation developed into a serious challenge for modern society. The covert use of social media to influence politics by promoting propaganda, advocating controversial viewpoints and spreading disinformation has become a regular tool of statecraft, with at least 51 different countries targeted by government-led online influence efforts since 2011 (and many of those lasting for years on end).

The extent of this problem was first widely recognized following the revelation of Russia’s influence efforts in the 2016 U.S. presidential election. This realization and the steady spread of fallacious content kicked off a strong global response; fact-checking organizations proliferated, and countries began implementing a raft of new policies. But these efforts proved insufficient to manage the infodemic that accompanied the coronavirus. This history provides a number of lessons. 

First, data released by the companies that own the commons on which online disinformation spreads can play a key role in creating awareness and catalyzing action. Twitter’s relatively open application programming interface (API) and robust data releases have enabled a substantial body of serious academic research that provides a richer understanding of how and why disinformation spreads. Facebook’s selective releases to the Digital Forensic Research Lab and Graphika provided visceral examples of disinformation campaigns that helped motivate further action across society. While more data will always be better for researchers, even limited, cooperative data sharing can advance society’s understanding of this problem.
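As a concrete illustration of why such releases matter, the sketch below computes daily tweet volume and the number of active accounts from a local copy of one of Twitter’s information-operations data releases. The file name and column names (“ira_tweets_release.csv”, “tweet_time”, “userid”) are assumptions for illustration; the actual releases are large CSV archives whose schemas have varied over time, so consult the codebook that accompanies each release.

```python
import pandas as pd

# Hypothetical local copy of one of Twitter's information-operations data
# releases. The file name and column names ("tweet_time", "userid") are
# assumptions; check the codebook that ships with the actual release.
tweets = pd.read_csv("ira_tweets_release.csv", parse_dates=["tweet_time"])

# Daily tweet volume and the number of distinct accounts active each day.
daily = (
    tweets.set_index("tweet_time")["userid"]
    .resample("D")
    .agg(["count", "nunique"])
    .rename(columns={"count": "tweets", "nunique": "active_accounts"})
)

print(daily.head())
```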

Second, dealing with online disinformation requires a wide range of actors. Independent nongovernmental organizations such as Poynter create societal capability. Analytics companies like FireEye and Graphika address different knowledge gaps than academic researchers. By making certain facts unambiguous, government investigations and legal indictments justify platforms’ efforts and catalyze legislative attention. The world now has a grassroots network finding disinformation. Major companies operating information commons are paying attention to this issue, albeit with varying degrees of success. And governments are working on regulations. So what is missing?

Third, the necessary institutions for rapidly developing knowledge on the real-world impacts of disinformation and how it moves across platforms do not currently exist, as others have noted. The research community simply is not producing the knowledge needed to address the trade-offs inherent in regulating online speech. For example, one of us recently completed comprehensive reviews of more than 80 studies relevant to understanding the effects of influence operations and more than 200 related to the efficacy of countermeasures against them. Evidence on the former is very limited, and the research base on countermeasures is tiny outside of fact-checking and excludes almost all of the most important things platforms do, such as deplatforming and algorithm changes. What’s worse, almost all studies focus on Western populations, which collectively represent a small share of those affected by disinformation.

As a practical matter, completing high-reliability studies on disinformation and influence operations requires a heavy engineering and data science lift before one can even get to deep social scientific questions such as “What effect did Twitter alerting 1.4 million people that they had been interacting with Russian trolls have on their online behavior and offline political engagement?”

Effectively addressing such questions requires a larger-scale research institution that can realize economies of scale in collecting and processing data on online behavior, the equivalent of a CERN for the information environment, as one of our co-authors describes it. CERN, the European Organization for Nuclear Research, is a large research institution on the French-Swiss border that hosts hundreds of scientists working on its particle accelerator complex. Particle accelerators benefit from economies of scale; larger ones generate higher energies, enabling experiments that are not possible with smaller equipment. As a result, Europe’s investment in one major multinational nuclear research center, as opposed to each country building its own smaller facility, enables new discoveries, with the side benefit of creating a place where scholars from the Global South can contribute to research.

So what would the equivalent look like for understanding the information environment? First, it would have a permanent research staff who could manage ongoing collections and develop a research software codebase to reduce the data preparation burden for studies. Second, it would have a visiting fellows program for researchers from academic institutions and industry. If properly structured, such a program would help both sides develop richer contextual understandings, foster new studies, and create opportunities for developing-country scholars to advance their work and build connections with the research teams at companies. Third, it would have physical infrastructure making it easier to address security concerns with private or proprietary data, as the U.S. Census Bureau’s Research Data Centers currently do with other kinds of information, while facilitating collaboration and knowledge transfer.

Such an institution would address many of the problems Alicia Wanless identified in her review of what’s working or not in research on influence operations. As she wrote, “[T]he pace and proliferation of influence operations outstrips the research and policy community’s ability to study, understand, and find good solutions to the problems presented by the new age of digital propaganda.” With the right institutions, that can be fixed.


Jacob T. Rob is a second lieutenant in the U.S. Army and graduated in 2021 with a degree from the School of Public and International Affairs at Princeton University. The views expressed here are those of the author and do not reflect the position of any organization.
Jacob N. Shapiro is a professor of politics and international affairs at Princeton University and director of the Empirical Studies of Conflict Project. The views expressed here are those of the author and do not reflect the position of any organization.
