
Twitter Brings Down the Banhammer on QAnon

Evelyn Douek
Friday, July 24, 2020, 2:56 PM

The conspiracy theory poses genuine danger, but Twitter’s action does not signal a new era of accountability for big technology platforms.

A crowd waits to enter a Trump campaign rally in Minnesota on October 10, 2019. (Source: Tony Webster, https://flic.kr/p/2hQ17sg, CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)


Are the days of the Wild Wild Web over? In recent weeks, social media platforms have unveiled a series of high-profile enforcement actions and deplatformings. All the major platforms rolled out hardline policies against pandemic-related misinformation. Facebook banned hundreds of accounts, groups and pages associated with the boogaloo movement, Snap removed President Trump’s account from its promoted content and YouTube shut down several far-right channels, including that of former Ku Klux Klan leader David Duke. And the hits keep coming: most recently, on July 21, Twitter announced it was taking broad action against content related to the conspiracy theory QAnon.

But however welcome Twitter’s response to QAnon may be, these actions do not signify a new era of accountability in content moderation. If anything, they are a demonstration of how powerful and unaccountable these companies are: they can change their policies in an instant and provide little by way of detail or explanation.

Twitter’s announcement about QAnon content was indeed sweeping. More than 7,000 accounts were taken down, and another 150,000 were prevented from being promoted as “trending” on the site or as recommended accounts for people to follow. URLs “associated with” QAnon are now blocked from being shared on the platform. QAnon accounts immediately began trying to find ways to evade the ban or migrating to other networks, kicking off what is sure to be an ongoing game of cat and mouse.

Generally speaking, Twitter has no rule against tweeting falsehoods or inaccuracies. Espousing conspiracy theories and outright lies does not breach its terms of service, consistent with platforms’ keen desire to avoid becoming “arbiters of truth.” That’s why the moves against QAnon feel like another watershed moment for a platform that is having quite the year, not least because it is in a seemingly escalating showdown with the president. The overall message Twitter wants to send seems clear: we are no longer the “free speech wing of the free speech party”; instead, “we will take strong enforcement action on behavior that has the potential to lead to offline harm.” It’s time to draw a line. (Or, perhaps more cynically, “We’re no Facebook.”)

But what line is Twitter drawing, exactly? In all the press coverage around Twitter’s action against QAnon, there was little clarity about what the policy would actually cover going forward.

Twitter said it was acting against QAnon under a new designation of “coordinated harmful activity” but provided no specifics about what the new label meant. Experience with similar platform-created lingo—like the now-ubiquitous “coordinated inauthentic behavior,” used most often in the context of election-related meddling—teaches that what the public thinks these designations should mean and what platforms think they mean are often two very different things. Twitter said that the FBI’s designation of QAnon as a potential domestic terrorist threat influenced its decision, and yet the enforcement action was not taken under its rules against violent groups (nor has the platform identified who is covered by that rule).

Likewise, Twitter did not explain what it meant when it said it was banning URLs “associated with” QAnon—a broad and ambiguous category that will no doubt be worked out through trial and error. QAnon is sprawling, incoherent and constantly evolving. Does the new ban cover all Jeffrey Epstein conspiracy theories, which QAnon believers have helped promulgate? How much musing about a “deep state” working against Trump—a feature of both QAnon and Republican politics more broadly—is “associated with” QAnon? Will Twitter be able to keep up with the pace at which the group concocts new paranoias, like the claim that the retail company Wayfair is facilitating child trafficking through the sale of pillows and cabinets?

On the other hand, it’s not clear that the new policy is about the content of QAnon believers’ tweets at all. Instead, Twitter’s action against QAnon accounts seemed more closely related to recent coordinated abuse campaigns against individual victims. That is, perhaps Twitter was acting not because of anything to do with the baselessness of the QAnon conspiracy theories or the stunning momentum of the movement, but because adherents were “swarming” or “brigading” other users, coordinating targeted campaigns against their accounts involving violent threats and other harassment. Banning this kind of abusive activity would be a massive and welcome step forward for Twitter, which has long been too tolerant of rampant harassment and abuse. But Twitter already has rules against harassment, so what exactly is new here? One of the victims was model and social media personality Chrissy Teigen, who had been threatening to abandon the platform—which, given the 13 million users who follow the unofficial Queen of Twitter, is no small threat. It’s unclear whether Teigen’s outspokenness about her harassment played a role in Twitter’s sudden and sweeping move. If the “coordinated harmful activity” policy is indeed going to be focused on this kind of harassment, that’s welcome—but it suggests that those who hope the policy signals an intention on Twitter’s part to start stamping out conspiracy content more generally will be disappointed. Nor is it clear how Twitter will decide what constitutes “coordination.”

These missing details are not mere nitpicking. The first round of press coverage omitted a key detail: Twitter would not be automatically including candidates or elected officials under the new rule. This is unfortunately significant, because more than 60 congressional candidates have now expressed support for QAnon, with 14 already on the ballot in November. The president himself has frequently retweeted and shared QAnon memes. High-profile, high-follower accounts like these are often highly influential in helping spread this content. In other words, Twitter’s quiet clarification revealed quite the loophole.

All these details largely escaped scrutiny in a week when Twitter really needed a good news cycle. The questions reporters focused on were mainly why the action had taken so long and when other platforms would follow suit.

This lack of clarity is not unique to Twitter. When Reddit kicked some 2,000 communities off its platform as part of a crackdown on hate speech, the company’s CEO got days of good press and a New York Times interview—despite the new policy being odd and ambiguous, and requiring a quiet update shortly afterward. In the wake of the Black Lives Matter protests, with platforms seemingly out-competing one another to signal how seriously they were taking hateful conduct, Twitch earned plaudits for suspending Trump’s account on the basis of footage from old campaign rallies. Yet when Twitch quietly reinstated the account a few weeks later, that decision received basically no attention. Likewise, all the major platforms have rolled out unusually heavy-handed policies about pandemic misinformation and acknowledged that they would make more enforcement mistakes during the spread of the coronavirus, but there has been no accounting of how effective the platforms have been at cracking down on misinformation, the scale of any errors or who bore the brunt of them.

Many of these actions are good, long overdue outcomes. QAnon is dangerous and unmoored from reality. And in a further positive development, it now seems inevitable that other platforms will similarly crack down. Fixating on policy details may seem like missing the forest for the trees.

But unquestioning good news cycles also run counter to years-long efforts to make platforms more transparent and principled about their rules. How platforms moderate matters. Explaining and justifying rules and their enforcement can help people come to accept platforms’ actions, even if they disagree with those decisions. More importantly, being clear and transparent about rules can entrench standards of content moderation for the long term. Clear policies become tools through which users can hold platforms to account, and umbrellas under which the less powerful can seek protection—there to be called on by different groups, once the news cycle has moved on or when the victim of harassment isn’t the Queen of Twitter. In the context of conspiracy theories—which thrive on ambiguity and perceived victimhood—such clarity is especially important. This is beneficial for platforms, too: policies should be both the justification and the shield for their decisions. When rules are clearly announced in advance and then consistently applied, it is easier to fend off charges of bias.

One might argue that this is all hopelessly naive: perhaps policy enforcement will only ever be as strong as the level of public outrage that can be generated in any particular case. That has indeed been true for much of the history of content moderation, in which policies have lain fallow or been inconsistently enforced until platforms are publicly called out. But the past few years have pushed platforms toward more transparency, more explanation and justification, and ultimately more accountability for the decisions they make that profoundly shape the public sphere. That project is too important to give up on just because some recent content moderation decisions have what many perceive to be salutary outcomes.

To be sure, the unusually interventionist approaches of platforms in recent months do show that things have changed. Without clear standards, though, this is not tech accountability. It remains lawlessness. These banhammers—that is, the discretionary blocking of users to clean up platforms—are still displays of arbitrary and unaccountable power. Users of these platforms, and the broader public, deserve more from the overlords of the online public sphere. In the meantime, don’t be surprised if you see a lot of things on Twitter that sure do look like “coordinated harmful activity” but aren’t taken down because they apparently don’t fall within this new rule—whatever that rule is.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
