
YouTube’s Bad Week and the Limitations of Laboratories of Online Governance

Evelyn Douek
Tuesday, June 11, 2019, 12:05 PM

The techlash has well and truly arrived on YouTube’s doorstep. On June 3, the New York Times reported on research showing that YouTube’s recommendation algorithm serves up videos of young people to viewers who appear to show sexual interest in children. In any other week this might have been a huge public controversy, but the news was consumed instead by a very different content moderation blow-up. Centered on the meaning of YouTube’s harassment and hate speech policies and whether a right-wing commentator with nearly four million subscribers had violated them, the week-long saga illustrates how different platforms are developing very different approaches to handling high-profile disputes about what they allow on their services.

What Happened?

It started on May 30, when Vox journalist Carlos Maza wrote on Twitter that a right-wing commentator named Steven Crowder had been routinely mocking Maza’s voice and mannerisms and describing him using racist and homophobic language. According to Maza, the harassment had been ongoing for years and included a doxxing after which his phone was bombarded with hundreds of texts, all reading “debate steven crowder.” Maza was clear that his complaint was directed not at Crowder but at YouTube, “which claims to support its LGBT creators, and has explicit policies against harassment and bullying …. But YouTube is never going to actually enforce its policies. Because Crowder has 3 million YouTube subscribers, and enforcing their rules would get them accused on anti-conservative bias.”

After Maza’s thread went viral and received coverage from major news outlets, the @TeamYouTube Twitter account responded, saying that the company was “looking into it further.” On May 31, Crowder released a video titled “Vox is Trying to Ban This Channel” (although Maza made clear that his tweets were sent in his personal capacity and not on behalf of Vox). YouTube’s response finally came on June 4. In a cursory four-part Twitter thread, YouTube wrote, “Our teams spent the last few days conducting an in-depth review of the videos flagged to us, and while we found language that was clearly hurtful, the videos as posted don’t violate our policies.” To “explain” the decision, YouTube said that “[o]pinions can be deeply offensive, but if they don’t violate our policies, they’ll remain on our site.” In an email to a journalist, YouTube clarified that it takes into account whether “criticism is focused primarily on debating the opinions expressed or is solely malicious.” Crowder’s videos, the company stated, did not violate the policy because “the main point of these videos was not to harass or threaten, but rather to respond to the opinion” expressed by Maza.

A day later, amid ongoing backlash, YouTube provided a further single-tweet update, announcing that it was demonetizing Crowder’s channel—that is, removing Crowder’s ability to earn money by running ads—“because a pattern of egregious actions has harmed the broader community.” The company linked to a blog post explaining that in rare cases when a creator does something “particularly blatant … it can cause lasting damage to the community, including viewers, creators and the outside world.” In these circumstances, “we need a broader set of tools at our disposal that can be used more quickly and effectively than the current system of guidelines and strikes.” These tools include demonetization and removal from YouTube’s recommendations.

Yet these actions did little to halt the slow-motion fiasco. Maza pointed out that YouTube demonetization would mean little to Crowder, who also makes money by selling merchandise, including a T-shirt reading “Socialism is for F*gs.” YouTube “clarified” that in order to reinstate monetization, Crowder would need to remove the link to the sale of those T-shirts. After this position stoked further outrage, YouTube apologized for the “confusion” and retreated to the view that the problem was not the T-shirts per se but, again, the “continued egregious actions that have harmed the broader community.” Meanwhile, as many rallied to support Maza, others took Crowder’s side: Senator Ted Cruz denigrated YouTube as the Star Chamber.

It was not until June 5 that YouTube provided anything like a full explanation for its actions on a forum other than Twitter. In a blog post titled “Taking a harder look at harassment,” the company wrote, “These are important issues and we’d like to provide more details and context than is possible in any one string of tweets.”

YouTube has two relevant policies: one on harassment and cyberbullying, and one on hate speech. The harassment policy reads:

Content or behavior intended to maliciously harass, threaten, or bully others is not allowed on YouTube. …

Don’t post content on YouTube if it fits any of the descriptions noted below. …

  • Content that is deliberately posted in order to humiliate someone
  • Content that makes hurtful and negative personal comments/videos about another person
  • Content that incites others to harass or threaten individuals on or off YouTube

For hate speech, YouTube says it will remove “content promoting … hatred against individuals” or using stereotypes to promote hatred based on attributes including nationality, race and sexual orientation.

YouTube’s ultimate blog post elaborated on these guidelines. For harassment, the question is whether the purpose of the video, taken as a whole, is to harass or humiliate. For hate speech, the same question: whether the “primary purpose” is to incite hatred. In either case, “using racial, homophobic, or sexist epithets on their own would not necessarily violate either of these policies.” Although it appeared that moments from the supercut Maza originally posted were facially inconsistent with YouTube’s policies, YouTube had decided that, in context and considered as a whole, Crowder’s videos were not harassment or hate speech. But it also promised to take “a hard look at our harassment policies with an aim to update them.”

In an interview on June 10, YouTube CEO Susan Wojcicki explained why the company had made the call to leave the videos up but demonetize the channel, saying that the company had a “higher standard” for creators who earn money from their videos.


The Meaning of YouTube’s Community Guidelines

The whole controversy centers on the meaning of YouTube’s community guidelines. This is true in two senses. The first is about the operational meaning of YouTube’s policies and whether the Crowder videos violated them. But the second, deeper debate is about what it means for YouTube, a private company, to have these guidelines at all. Whatever your view of the underlying substantive issue of what YouTube should or should not support on its platform, last week’s events raise a number of fundamental questions about online governance. Whether or not YouTube should adopt policies about hate speech and harassment on its platform, the fact is that it has adopted those guidelines and purports to take action based on them.

But despite the presence of these policies, YouTube failed at every step of the Maza-Crowder debacle to communicate the basis of its actions clearly. Its initial conclusory tweet that Crowder’s videos did not violate YouTube’s policies was hard to square with the language of those policies. On their face, the videos Crowder published were “deliberately posted in order to humiliate” Maza and made “hurtful and negative personal comments” about him. The very fact that it took YouTube four days to respond suggests that this was not an easy call and, therefore, deserved more explanation. The partial flip-flopping regarding monetization and the selling of T-shirts was unpredictable and inexplicable. The company’s ultimate position, that Crowder’s videos did not violate its policies but “have harmed the broader community,” is opaque and provides little guidance for users about what content YouTube will penalize.

Meanwhile, the limited explanations the company did give seem poorly thought through. Epithets framed as a “debate” or “comedic routine,” YouTube’s comments suggested, do not violate its policies. But this creates a standard so subjective as to seem unworkable in practice. YouTube says that its guidelines are about fostering the “trust” involved in keeping the YouTube community “fun and enjoyable for everyone.” It counsels people to take a purposive approach to interpreting them: “Don't try to look for loopholes or try to lawyer your way around the guidelines—just understand them and try to respect the spirit in which they were created.” But despite its own advice not to “look for loopholes” in the guidelines, YouTube tied itself in knots, creating the appearance that the community guidelines are not so much rules as empty words that the company can interpret however it chooses.

These events showed a hard truth: Without government regulation, there is nothing requiring YouTube to set or abide by clear policies it holds out as being the rules of its platform. The past few years have seen growing momentum behind calls for platforms to uphold the human rights of their users, including their rights not just to freedom of expression but also to due process. Many of the complaints about YouTube’s handling of the Maza-Crowder situation focused on the deficient processes and explanations that accompanied the company’s actions. The Verge’s Casey Newton called on YouTube to “have these arguments with us in public.” Former Facebook Chief Security Officer Alex Stamos said that YouTube needed “much more transparency in how these decisions are made. They need to document the thinking process, the tests they are using and the precedents they believe they are creating.” Current company employees bemoaned the public trust that is lost when platforms fail to explain their decisions. The UN Special Rapporteur on freedom of expression suggested affected users deserved a more fully reasoned response than YouTube’s initial conclusory statement that Crowder’s videos did not violate its policies.

Even if it would be normatively desirable for content moderation decisions to be transparent and accountable, there is no lever in the current legal landscape to enforce these calls. I have written before about proposed regulations in other countries that may change this. But currently, as Sarah Jeong put it in the New York Times, “YouTube is entitled to shoot entirely from the hip.” Nothing requires YouTube to provide an avenue to productively channel or manage user grievances.

Laboratories of Online Governance

Justice Louis Brandeis famously praised federalism as allowing states to serve as laboratories of democracy, trying different approaches to rules to discover what works best. The governors of large online spaces now appear to be taking a similar approach. Although there remain superficial similarities between the large social media platforms, their approaches to dealing with the difficulties of content moderation look set to diverge in important ways. In response to many of the same criticisms YouTube faced in the past week, Facebook has announced it will create an oversight board, independent from the company, to hear appeals of its content moderation decisions, with the goal of bringing exactly the kind of transparency and accountability to Facebook’s content moderation ecosystem that YouTube lacked.

Where YouTube’s decisions appeared motivated by commercial considerations, with observers speculating that some accounts have become “too big to fail,” users might have more confidence that well-designed independent oversight would be concerned first and foremost with the application of the publicly available rules. Where YouTube seemed ill-prepared to defend its actions, resorting to confusing tweets, an oversight board’s purpose would be to hear disputes and provide public reasoning for its decisions. And these reasons would be focused on establishing workable precedents (rather than subjective standards such as whether a slur was used in the course of “debate”) to create a coherent body of platform law. With proper institutional design, the decision-makers would be less susceptible to public pressure, which might prevent disputes from descending into such painful online battles between opposing camps. All of this would engender greater public legitimacy in the platform’s policies and decisions.

Given these benefits, there was speculation last week that YouTube might adopt a similar institutional mechanism. But the public legitimacy gained by creating independent oversight comes at the cost of no longer being able to let business considerations govern those decisions. Ultimately, the Facebook oversight board represents a bet: that the public legitimacy it might create by introducing a check and balance into its content moderation system is a good long-term investment to stem public controversy and create user buy-in for the platform’s rules. YouTube’s actions last week suggest a different gamble. The ad hoc, reactive and short-term thinking that seemed to pervade YouTube’s response indicates that public legitimacy is not a governing concern for its decision-makers.

YouTube’s wager last week was not new, but until now the payoffs have seemed to go the other way. YouTube’s parent company, Google, did not even send a representative to Senate committee hearings about foreign influence operations on social media, leaving Facebook and Twitter representatives to take most of the heat. When a video of House Speaker Nancy Pelosi, altered to make her appear drunk, started circulating on social media recently, Facebook decided not to remove the video and rolled out a high-level executive to explain the decision on national television. Facebook was widely criticized for both its decision and the executive’s attempts to explain it. Meanwhile, YouTube quietly took down the video. The ultimate explanation YouTube gave—that the Pelosi video violated the company’s deceptive practices policy—was what Newton generously described as “a little slippery.” Yet YouTube largely escaped public scrutiny—only to face a firestorm of criticism around the Maza-Crowder issue weeks later.

In principle, however, YouTube’s lack of transparency and accountability regarding its decision to take down the Pelosi video should be as problematic as the Maza-Crowder decisions. Similarly, YouTube’s pro forma statement that it had decided to remove Alex Jones’s channel, offered without explanation right when a number of platforms were suddenly making the same decision, was, by the same standards, also procedurally illegitimate.

Among the theories animating the current push to break up the large social media platforms is the notion of “laboratories of online governance”—the idea that competition will allow greater experimentation and let “healthier, less exploitative social media platforms” emerge. This may be so. But YouTube’s actions over the past week are a case study showing that, without regulatory guardrails, there is no guarantee that the platforms that emerge will focus on legitimizing the way they exercise their power.

On June 5, in the middle of the ongoing fallout from Maza’s complaints, YouTube released a new policy on hateful and supremacist content, announcing that it was expanding its rules to ban videos promoting Nazi ideology, Holocaust denial or Sandy Hook conspiracy theories. The platform stated in the announcement that “context matters,” so condemnation or analysis of hate could stay up. But in the rollout of the new rules, YouTube removed benign content, including a history teacher’s educational videos containing archival Nazi footage and an independent journalist’s videos documenting extremism.

At the end of a very long week for YouTube, many unanswered questions remained. Chief among them is what the value of YouTube’s new policy will be if the community doubts YouTube has either the capacity or the intention to actually enforce the policy as written.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
