Tech Companies Must Fight White Supremacy, Regardless of Political Dangers

Joshua A. Geltzer, Karen Kornbluh, Nicholas Rasmussen
Wednesday, August 7, 2019, 5:07 PM

Responding to the recent bloodshed in El Paso and elsewhere, President Trump laid heavy blame on the internet and then invited social media companies to a White House summit to be held on Aug. 9 to discuss efforts against online extremism. While the president’s diagnosis of the internet’s contribution to extremist violence was heavy-handed and imprecise, alluding only vaguely to “the perils of the internet and social media,” there was a kernel of truth in Trump’s remarks: Leading technology companies can and should do more to fight the spread of white supremacy on their platforms, just as they have stepped up their efforts against radicalization by the likes of the Islamic State and al-Qaeda.

The irony is that, if those companies do what Trump is urging, they’re likely to run into baseless allegations by the president and his allies of “anti-conservative bias.” That’s because these political actors will see some of their online messaging, which at times overlaps with that of far-right extremists, removed. But that’s no reason to hold back. With lives on the line, tech companies must work to thwart violent white supremacist activity. They should act with clarity, consistency and transparency, all while affording appeal rights.

Tech companies like Facebook, Google (which owns YouTube) and Twitter have augmented their efforts against international terrorist groups like the Islamic State and al-Qaeda over the past half-decade—especially since President Obama challenged them publicly to do better after the December 2015 San Bernardino terrorist attack. The companies are now taking down terrorist content faster than before, more effectively preventing accounts that spread terrorist content from regenerating and, overall, contesting the virtual safe haven that these terrorists once enjoyed online. It’s not perfect, but it’s progress.

Those same companies, however, are struggling to address the surge of violence by domestic terrorists. In particular, platforms find it difficult to determine when it is appropriate to restrict access to their services. As tragedies continue to mount, the companies need conceptual clarity regarding how they govern their platforms.

The first step must be to create information-sharing networks with law enforcement on white supremacist terror threats, like those that tech companies and the government have already built for international terrorism. These networks must be narrowly focused, sharing at the unclassified level only information that is appropriate and relevant to specific violent extremist activity online.

Second, application and enforcement of the platforms’ terms of service are inconsistent today. Too many accounts are still online purveying the inherently hateful, discriminatory and downright dangerous ideologies associated with neo-Nazism, neo-Confederacism and other forms of white supremacy. The platforms should make clear that they will establish a zero-tolerance policy for clearly illegal activity, including incitement to violence—and that they will report such criminal activity to the FBI, just as they report child pornography. They should clarify that they will take down content that embraces and echoes the underlying ideology of white supremacist terrorists—like that shared by recent terrorists from Christchurch, New Zealand, to El Paso, Texas—and also clarify under what circumstances they will take down the accounts themselves.

Platforms should also establish that their terms of service do not permit any hate speech—even when shared by public figures. Facebook signaled with its recent policy update that it would act more aggressively against this activity. Figuring out precisely what qualifies as hate speech can be difficult—Facebook has rightly faced criticism for insisting that it will treat “white nationalism” as hate speech only when the speech specifically uses those words—but committing to the principle and learning from outside critiques when mistakes are made are key steps down the road of corporate responsibility. Moreover, because white supremacists use the major tech platforms to spread their content—including so-called manifestos purporting to explain acts of violence—after posting such content to smaller sites like 8chan and Gab, the major platforms should join forces to develop and implement best practices to stop linking to such content when it is hosted on sites that consistently and knowingly permit illegal and terrorist content. Already, civil rights groups are calling out Twitter for continuing to provide a verified account to 8chan, the online forum where the El Paso shooter posted a hateful message before the attack.

Third, it is important to recognize that there will be complex cases. Online platforms have struggled with the question of when to take down an entire account or set of content in the context of international terrorism. Take Anwar al-Awlaki, the U.S.-born al-Qaeda leader killed in a 2011 drone strike: His online propaganda is still consistently found in the possession of jihadist terrorists, especially English-speaking ones. For years, YouTube distinguished between “bad Awlaki” videos explicitly urging violence and “okay Awlaki” videos “merely” preaching jihadism’s ideological underpinnings, even though all Awlaki videos clearly pointed—by design—toward allegiance with al-Qaeda. YouTube eventually updated its policies to determine there are only “bad Awlaki” videos.

Under the First Amendment, the government would face limits in what it could do to suppress the speech of a domestic terrorist. But tech companies do not face the same constraints. To reduce the risk of abuse, however, the platforms should be accountable to the public for these actions by spelling out their rules, adding transparency measures and providing rights to speedy appeal.

Fourth, updating company policies is important, but it’s just the beginning: It is even more important to implement those new policies vigorously. Tech companies have said that their comparative success in combating Islamic State-disseminated content on their platforms owes much to their increasing reliance on technology to identify such content swiftly and prevent its reemergence once identified. That includes utilizing artificial intelligence to “learn” from previous examples what Islamic State content looks like so as to spot new videos, recordings and images even before those are flagged by users, then refining that artificial intelligence when it proves—as it often does—overly inclusive. In the context of domestic terrorism, the platforms must work with law enforcement to determine whether there are useful indicia of terrorist content; this content will likely require human review to screen for, or at least correct, false positives. Moreover, the companies must invest more in hiring, training and better supporting moderators. To minimize both abuses and accusations of bias, the companies will need to be more transparent about what they take down and why, as well as offer speedier, more transparent appeals processes.
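
To make that division of labor concrete, here is a minimal sketch of the triage pattern the companies describe, assuming a hypothetical classifier that assigns each post a single confidence score; the thresholds, the Post record and the triage function below are illustrative inventions for this article, not any platform’s actual system.

    # Illustrative sketch only: the single "score" and the thresholds are
    # hypothetical stand-ins for a platform's real classifier. The point is
    # the pattern: act automatically only on very high-confidence matches,
    # and route borderline cases to human moderators to catch false positives.
    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        score: float  # model confidence, between 0 and 1, that the post violates policy

    AUTO_REMOVE = 0.95   # remove automatically only when the model is very confident
    HUMAN_REVIEW = 0.60  # send borderline cases to moderators instead of removing them

    def triage(posts):
        """Split posts into automatic removals, a human-review queue, and no action."""
        removed, review_queue, untouched = [], [], []
        for post in posts:
            if post.score >= AUTO_REMOVE:
                removed.append(post)
            elif post.score >= HUMAN_REVIEW:
                review_queue.append(post)
            else:
                untouched.append(post)
        return removed, review_queue, untouched

    if __name__ == "__main__":
        sample = [Post("a", 0.99), Post("b", 0.72), Post("c", 0.10)]
        removed, review_queue, untouched = triage(sample)
        print([p.post_id for p in removed])       # ['a']
        print([p.post_id for p in review_queue])  # ['b']
        print([p.post_id for p in untouched])     # ['c']

In practice, moderator decisions on the review queue would also feed back into the model—the “refining” step the companies point to when automated flagging proves overly inclusive.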

Fifth, combating domestic terrorism poses unique political risk for tech companies: In policing this content, unlike in the case of Islamist terrorist content, the companies will inevitably remove some material created or disseminated by far-right political commentators and even politicians. The overlap in language between the suspected El Paso terrorist and President Trump himself, which many Americans noted with horror, could yield takedowns of content from prominent figures—perhaps even the president of the United States. That will be awkward for the technology companies and may exacerbate the claims of political bias by Trump, Sens. Ted Cruz and Josh Hawley, and others—claims that have led Trump, in particular, to attack tech companies relentlessly for purportedly reducing his online followers, silencing his allies’ speech and suppressing news favorable to him. With Trump already lashing out at Silicon Valley as allegedly hostile to right-of-center politics, the removal of content that fuels white supremacy but also reflects his own political rhetoric risks further aggravating the president. And the heat won’t be limited to the United States: In Germany, for example, political figures have already found some of their online messages triggering the tech companies’ scrutiny.

When there are lives on the line, political awkwardness must give way to responsible corporate behavior and good stewardship of the internet. Reliance on law enforcement, a sound understanding of the ideological underpinnings of terrorist activity, transparency and accountability can be the digital platforms’ best friends as they go all in on thwarting white supremacy online.


Joshua A. Geltzer is Deputy Assistant to the President and Deputy Homeland Security Advisor. He was previously Executive Director of and Visiting Professor of Law at Georgetown University Law Center's Institute for Constitutional Advocacy and Protection, and a Fellow in New America’s International Security Program.
Ambassador Karen Kornbluh is Senior Fellow and Director of the Digital Innovation and Democracy Initiative at the German Marshall Fund of the US. She served as US Ambassador to the Organization for Economic Cooperation and Development and in senior roles at the Treasury Department and the Federal Communications Commission. Kornbluh was Executive Vice President at Nielsen and began her career as an economic forecaster and consultant.
Nicholas Rasmussen was Director of the National Counterterrorism Center from 2014 to 2017 under Presidents Obama and Trump. He currently directs the National Security and Counterterrorism program at the McCain Institute for International Leadership in Washington, D.C., and is a Professor of Practice with the law school at Arizona State University (ASU).
