
Moving Beyond Fears of the ‘Russian Playbook’

Camille François
Tuesday, September 15, 2020, 8:01 AM

Tackling disinformation requires humility, calm and attention to detail as the threat evolves and becomes more complex: Tropes such as the “Russian playbook” are no longer helpful ahead of the November election.

Mark Zuckerberg speaks about disinformation at Facebook's F8 2018 Keynote. (Anthony Quintano, https://flic.kr/p/26F9rYm; CC By 2.0, https://creativecommons.org/licenses/by/2.0/)


In February 2019, I sat in a French bakery in Charlotte, North Carolina, interviewing A., an activist and community organizer who had been targeted by the Russian “troll farm” known as the Internet Research Agency (IRA). Together, we did our best to reconstruct his old conversations on Facebook Messenger with users who turned out to be Russian trolls. The conversations began in 2016 and continued throughout 2017. The difficulty was that we had only one side of the conversation: We could read his messages, but the platform had removed all messages from the now-deactivated troll account. A. and I reexamined many months of his messages and replies to “Helen,” one of the fake accounts operated by the IRA. “Helen” had asked how to support the campaigns he was organizing on the ground in North Carolina. Another account had been on standby to design and format any posters he might need for his rallies. We joked that this troll’s nickname should be the “flier troll.”

But A. also had a serious question: At what point should he have known that he was being manipulated? What were the telltale signs of the “Russian playbook”? How does one “spot” a Russian troll, and how can activists keep an eye out for these suspicious signs in 2020?

These are complicated questions. While I could give A. general pointers on how to assess the likelihood that an account owner on social media is not who they say they are (for instance, by reverse-image searching profile pictures or scrutinizing the timeline of posts), there is no fixed set of telltale signs that Russian trolls have used consistently across the years and against their targets.
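To make those general pointers slightly more concrete, here is a minimal, hypothetical sketch in Python of two such checks: comparing a profile picture against known images by perceptual hash (a common building block of reverse-image searching) and flagging simple anomalies in an account’s posting timeline. The function names and thresholds below are illustrative assumptions, not a description of any tool actually used by researchers or platforms.

```python
# Hypothetical illustration only: two generic account-assessment checks.
# Assumes Pillow and ImageHash are installed (pip install Pillow ImageHash).

from datetime import datetime
from typing import List

from PIL import Image
import imagehash


def profile_picture_matches(profile_path: str, reference_paths: List[str],
                            max_distance: int = 6) -> bool:
    """Return True if the profile picture is perceptually close to any known
    reference image (e.g., a stock photo or a picture lifted from the web)."""
    profile_hash = imagehash.phash(Image.open(profile_path))
    return any(
        profile_hash - imagehash.phash(Image.open(ref)) <= max_distance
        for ref in reference_paths
    )


def timeline_red_flags(created_at: datetime,
                       post_times: List[datetime]) -> List[str]:
    """Flag simple timeline anomalies: a very young account posting heavily,
    or long dormancy followed by a sudden burst of activity."""
    flags = []
    if not post_times:
        return ["account has no posts"]
    post_times = sorted(post_times)
    age_days = max((post_times[-1] - created_at).days, 1)
    if len(post_times) / age_days > 50:  # illustrative threshold
        flags.append("unusually high posting rate for the account's age")
    gaps = [(later - earlier).days
            for earlier, later in zip(post_times, post_times[1:])]
    if gaps and max(gaps) > 180:  # illustrative threshold
        flags.append("long dormancy followed by renewed activity")
    return flags
```

Neither check is conclusive on its own; as the rest of this piece argues, the signals change from one operation to the next, which is precisely why a static checklist of “Russian playbook” tells does not exist.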

Yet the idea of a single “Russian playbook” appears again and again in discussions of social media manipulation. Politicians, news outlets and commentators have used the phrase over and over for the past three years. It has mutated into an amorphous concept that can accompany almost any news cycle: Black Lives Matter protesters should be wary of being infiltrated by Russians, per the Russian playbook! Chinese diplomats are now more vocal on American social media platforms, adopting the Russian playbook! Trump is spreading disinformation on social media and following the Russian playbook! Iran also runs information operations—one more for the Russian playbook!

This is not a good habit as the United States heads toward the November 2020 election.

For the past five years or so, I have been analyzing and exposing various foreign information operations on social media—spending months at a time in social media data scrutinizing posts, interviewing targets and former trolls alike, and working with social media platforms to analyze technical signals that betray manipulative actors impersonating activists, journalists and politically engaged citizens. I worked for Google at the time of the last U.S. presidential election, then with the Senate Intelligence Committee in 2017 to analyze the IRA’s activities across platforms, and I have documented multiple successful and unsuccessful attempts at manipulating political conversations on social media. As Graphika’s chief innovation officer, I now lead a team of investigators and analysts who independently and routinely investigate and expose these issues.

Over the years, I have come to agree with the writer Peter Pomerantsev’s wise observation that the Russian playbook is akin to a Russian salad: not very Russian, and with different ingredients every time.

This isn’t to say that these contemporary campaigns aren’t rooted in long traditions. Thomas Rid’s recent book, “Active Measures: The Secret History of Disinformation and Political Warfare,” points to many tactics that would later surface in these campaigns: the targeting and infiltration of activist communities, the consistent use of forgeries and the appetite for highly divisive political controversies with an element of truth to them. But this tradition doesn’t manifest as a fixed and clear “playbook.” Rather, it yields a series of experiments and rapidly evolving tactics used by a diverse group of actors competing to outdo one another in their creative targeting of online conversations. And, more importantly, these actors adapt quickly to public reactions to their campaigns and to the latest efforts to detect their work.

The failure to recognize the diversity of these techniques in contemporary active measures—or the plurality of actors involved—leads to an oversimplified view of Russian interference, and the prominence of this oversimplified view doesn’t bode well for U.S. election security efforts. When applied to behaviors that are classic components of deceptive campaigns online (from fake accounts to divisive headlines), the “playbook” trope frames these techniques with an obsessive focus on one actor alone—when in reality, Russia is neither the most prominent nor the only actor using manipulative behaviors on social media. This framing ignores that other actors have abundantly used these techniques, and often before Russia. Iran’s broadcaster (IRIB), for instance, maintains vast networks of fake accounts impersonating journalists and activists to amplify its views on American social media platforms, and it has been doing so since at least 2012.

What’s more, this kind of work isn’t the exclusive domain of governments. A vast market of for-hire manipulation has proliferated around the globe, from Indian public relations firms running fake newspaper pages to defend Qatar’s interests ahead of the World Cup, to Israeli lobbying groups running influence campaigns with fake pages targeting audiences in Africa.

So, Russia isn’t alone in this game. But Russian actors participating in information operations should not be painted with a broad brush either. The “Russian playbook” trope paints a warped picture of Russian interference efforts by reducing a complex and evolving network of actors and techniques to the type of campaigns best known by the public: the posts and ads on Twitter and Facebook that were used to inflame American divisions in 2016, which were printed on large posters and held up for senators and constituents to see during the 2017 congressional hearings on Russian interference.

In reality, the short history of Russian information operations targeting U.S. audiences on social media is much more complex than these few, though vivid, vignettes suggest.

First, the Russian organizations that have engaged in these types of activities comprise a diverse cast of entities that look nothing like one another. The IRA “troll farm,” the organization responsible for the infamous ads and tweets, is essentially a glorified marketing shop: It has a graphics department and a search engine optimization department, and it hires freelancers to craft well-performing social media stories on different platforms, tracking likes and comments. This is fundamentally different from the various units of Russia’s military intelligence agency (known as the GRU), which also crafted fake profiles and accounts to distribute hacked materials and charged narratives across the internet. GRU officers are more Homeland than Mad Men.

There are also serious blind spots in the collective understanding of these various entities. My team at Graphika recently exposed a Russian disinformation actor that no one has been able to precisely pinpoint, which we nicknamed “Secondary Infektion.” Who is behind it? Spooks, marketers, students? No one can tell yet, but Secondary Infektion has been creating fake profiles on social media to disseminate divisive stories since at least 2014, and it more recently attempted to interfere in the 2019 U.K. general election. These campaigns are remarkable for a few reasons, notably their consistent use of forged documents and the fact that their content spread across six years and 300 distinct platforms and sites, targeting multiple countries and audiences around the world.

Second, these different Russian entities don’t simply “apply” a static playbook. They experiment, evolve and adapt, even in the short time frame of these past six years. The IRA provides a clear illustration of how dynamic these techniques can be. The organization’s first years of using social media to target U.S. audiences from afar, in 2014 and 2015, were woven with what seems to be a series of bizarre experiments: Can one use social media posts and text messages targeted at local residents in Louisiana to create a wave of panic around a chemical plant explosion that never happened? Will New Yorkers walk a few blocks for the promise of a free hot dog they read about on Facebook? These experiments weren’t designed for impact: They were seemingly crafted to assess under what conditions people will believe what they see online, or go offline to pursue something they’ve read about on the internet (a good question!). The year 2016 was a turning point in the history of the IRA, when the group moved from assessing behavior to taking frontal action. This was the year of the now-infamous election ads, posts, tweets and calls for protests in Facebook groups, the peak of fake influencers and divisive posts.

But the IRA’s story doesn’t end after the 2016 election. Emboldened by its fame on Capitol Hill and in the media, the IRA has continued its trajectory in targeting American audiences. In 2017, Silicon Valley and Washington woke up to the threat of foreign interference, suddenly paying acute attention to research that a handful of individuals, organizations and investigative journalists had contributed on the topic. The pendulum swung from an attitude of “This surely isn’t happening or can’t matter much” to “This is the defining threat of the century.” In response, the IRA started a game of cat and mouse with the investigators responsible for tracking its next moves. When the IRA’s fake profiles and groups were suspended across all social media platforms—albeit in a somewhat disorderly sequence—the troll farm immediately struck back by pitting the Silicon Valley giants against one another. On Twitter, fake IRA accounts complained that Facebook had illegitimately censored their (fake) Black activism pages, because “Facebook does a great job supporting white supremacy.” On Google, the IRA discovered the power of ads and bought its way into redirecting its audiences to newly established websites for fake activist organizations that used to convene on Facebook. And 2017 was also the year the IRA diversified the groups it impersonated, adding fake Muslim and LGBTQ groups to its roster.

By 2018, the IRA was ready to seize electoral momentum again. In the years since Trump’s election, the group had learned that the reaction to Russian interference in 2017 created just as much chaos and division in the U.S. as the handful of posts the trolls had designed to divide Americans on social media in 2016. This paved the way for a new IRA strategy: meta-trolling, that is, social media campaigns designed to be exposed and covered by the media in order to reignite the divisive and chaotic debate about Russian interference. For example, the IRA accounts that focused on the 2018 midterm elections were accompanied by a website bragging about the operation. The IRA also launched a U.S.-focused outlet, making little effort to hide the outlet’s connection to the troll farm.

The development and adoption of new techniques doesn’t necessarily mean the old ways are put to rest. In October 2019, my team studied the first IRA campaign to focus explicitly on the 2020 election and to impersonate candidates’ supporters. Not only were the tactics used by the group reminiscent of those used in 2016, but some of the memes from 2016 had simply been recreated and uploaded again on these new pages.

In late 2019, the use of unwitting proxy actors became an increasingly important component of IRA operations. The group itself has evolved and restructured, with more formal and official-looking “media-like” endeavors mimicking news publications to hide in plain sight on the one hand, and individuals continuing covert and experimental activities on the other. In March 2020, CNN and Clemson University professors Darren Linvill and Patrick Warren, along with Facebook, Twitter and Graphika, investigated bizarre Instagram posts crafted by Ghana-based groups that targeted Black Americans on behalf of an IRA orchestrator while thinking they were simply engaging in online activism. (Both the Ghanaians and the Americans were unknowingly manipulated by the IRA in this operation, which is why we named it “Double Deceit.”) This use of unwitting participants was a key feature of the latest known IRA operation concerned with U.S. elections: “Peace Data,” a website masquerading as a news source, which Facebook, Twitter and PayPal all investigated after U.S. law enforcement notified them of its Russian origin. Our investigation at Graphika showed that this operation targeted journalists and media directly, reusing media content and recruiting unwitting freelancers to craft articles seeking to delegitimize presidential candidates in the eyes of a left-wing audience. That operation also marked the IRA’s first known use of artificial intelligence to generate fake faces for its online personas and false profiles.

Two months ahead of the 2020 U.S. election, it would be wise to retire tropes such as the “Russian playbook.” The phrase warps understanding of the broader ecosystem of disinformation actors and eclipses the complexity and evolution of Russian active measures online. It also conveys a false sense of confidence; a lot remains unknown about the different facets, techniques and actors of Russian active measures. There are entire actors that researchers and platforms aren’t able to properly identify, and troves of private messages exchanged over the years with people like A., the activist from Charlotte, or with the freelancers recently recruited by Peace Data, that have never been examined by experts, the public or the targeted communities.

Menacing tropes about Russian interference are most harmful when they are used to cast doubt on the organic nature of legitimate grassroots movements. Recent, important, organic movements for justice around the world have been accused of being orchestrated by Russian efforts: from the gilets jaunes movement in France to the recent protests for racial justice in the U.S. Democracies need to create their own playbooks to defend themselves from disinformation actors of all kinds, foreign and domestic. In part, that requires an accurate understanding of what the disinformation threat actually is. But the top priority for defending the integrity of political discourse must be protecting those voices and communities targeted for manipulation—particularly those that are most vulnerable to having their ideas and loyalties unfairly discredited.

The focus on the “Russian playbook” may have been helpful in previous years, when people were still struggling to understand the nature of political interference online by foreign actors. But in 2020, the idea of a single playbook used by a single actor is no longer accurate or helpful. Tackling disinformation requires constant humility about what remains unknown, calm in the face of a threat that gets worse if it’s inflated, and attention to both details and individual stories as this landscape evolves and becomes more complex.


Camille François specializes in understanding and mitigating harms emerging from digital technologies, and serves as a Faculty Affiliate at Columbia University's Institute of Global Politics at the School of International and Public Affairs.
