
Substack’s Curious Views on Content Moderation

Jacob Schulz
Monday, January 4, 2021, 4:13 PM

The popular email newsletter platform released a blog post about its content moderation philosophy. It's an interesting but flawed document.

A person holding a cell phone. (By: Tati Tata, https://tinyurl.com/t5x5upw; CC BY 2.0, https://creativecommons.org/licenses/by-nc/2.0/)

Published by The Lawfare Institute in Cooperation With Brookings

Every online speech platform can count on controversy over content moderation.

It’s what Evelyn Douek calls the “inevitable lifecycle of a user-generated content platform.” Platforms start with a small base of loyal users, but with growth comes scrutiny. Maybe there’s a particular piece of content on the platform that draws public ire. Maybe there’s a general sentiment that the platform turns a blind eye to dangerous content. And then critics, and sometimes users, begin to push the company to moderate the content it hosts more aggressively. Beef up the terms of service. Pay more attention to hate speech. Be transparent about rule-making.

Substack has now entered this phase of its life cycle as a popular internet platform. The company, which runs email newsletters for journalists and “creatives,” has begun to take some flak for the speech it hosts. Part of the concern comes from particular Substack posts. But there’s also a more general concern. It’s a company that gets branded as a refuge for the “censored.” Free from editorial control at traditional media companies, what types of posts might end up on Substack? And how might Substack reconcile its growing reputation as “a corrective against growing intolerance of heterodoxy” with the inevitable need to get some extreme content off its service?

Substack made its own intervention into the conversation on Dec. 22. The company’s three co-founders released a blog post that details “Substack’s view of content moderation.” Chris Best, Hamish McKenzie and Jairaj Sethi acknowledge the reality of the “inevitable lifecycle,” writing, “[a]s Substack grows, there is increasing interest in the stance we take on content moderation.” But the founders seem to be reading Justice Oliver Wendell Holmes (or at least some pre-2016 blogs) and try to plant Substack’s flag firmly in the sand: “We prefer a contest of ideas. We believe dissent and debate is important.” Thus, “[a]ll things in moderation—including moderation.”

Parts of the blog post contain thoughtful, if unrevolutionary, meditations on the incentive structures that lead to polluted information environments. And the post is revelatory about how this for-profit company thinks about moderation. But the post also elides the company’s own incentives for virality. And it’s a strange document to read given that Substack very much does do content moderation. The real question going forward isn’t whether Substack will become “the moral police”—it’s whether and how it will enforce and clarify its own rules.

Substack deserves credit for releasing the blog post in the first place. It’s valuable for a young speech platform to release a public text—even a flawed one—upon which it can ground future moderation-related decisions. As Douek wrote of Substack, “Every platform needs to build its view of content moderation in from the beginning and should be upfront about it.” Facebook, for example, has suffered from trying to build a content moderation regime on top of a foundation bereft of any real moral doctrine. And openness about decision-making plays an important role in ensuring the legitimacy of any content moderation system. Polling from the Knight Foundation, for example, found appetite among respondents for transparency from platforms about content moderation. Anna Wiener detailed in the New Yorker that Substack’s founders themselves make all content moderation decisions internally, and one of the founders told her that Substack doesn’t comment on the decisions; transparency is clearly an area for growth at the company, and the blog post is a step in the right direction.

The candor in certain parts of the post provides rare, mask-off clarity about the type of thinking that undergirds content moderation at a for-profit organization. Platforms might have principled reasons to pursue a particular content moderation approach. Maybe a platform thinks it’s a moral good to establish and aggressively enforce robust rules, for example. But maybe it also sees content moderation (or lack thereof) as another way to respond to consumer demand. And for all the high-minded stuff in the blog post about the “contest” of ideas and the importance of “free speech” in “help[ing] us survive as a society,” the founders don’t hide their ultimate justification for taking a hands-off moderation approach. They write of their content moderation philosophy: “We welcome competition from anyone who thinks we’re wrong about this …. We are happy to compete with ‘Substack but with more controls on speech.’” At the end of the day, they imply, the ultimate judge of the merits of a particular content moderation approach is the market. A moderation style is “right” if it attracts consumers and lends itself to a monetization strategy. Prove we’re wrong about this remarkably complicated question by raising more Series A funding, they dare. This underlying point has flaws—the marketplace of ideas hasn’t exactly fostered a healthy 21st century information ecosystem—but it’s a helpful clarifying moment; they say the quiet part out loud.

The blog post does contain some valuable reflections about the differences between various speech platforms, but the observations end up reading as reductive.

The founders assert, correctly, that “Substack is different from social media platforms.” They bemoan that critics often lump Substack in with platforms like Twitter, YouTube, Facebook and Instagram. Substack stands out from those behemoths for a couple of reasons, they argue. It doesn’t rely on an algorithm that curates feeds “designed to maximize engagement.” Instead, “readers choose what they see.” The founders frame the archetypal speech relationship on the service—reader-to-writer—as a private compact. “A reader makes a conscious decision about which writers to invite into their inboxes” (emphasis added). So Substack isn’t the public square. It’s more like how people once may have “invite[d]” a thinker or polemicist into their home for a discussion. This framing of the speech dynamics at play bleeds into the platform’s attitudes about moderation. Wiener notes that one of the founders “has suggested that Substack contains a built-in moderation mechanism in the form of the Unsubscribe button.” As Quinta Jurecic characterized it to me, it’s moderation by kicking someone out of your house.

The basic observation reflected in this thinking is uncontroversial: Different services have different levels of involvement in connecting users with a particular piece of content. Forums, for example, inflect the journey from poster to reader in limited ways: They create certain subgroups that dictate where users can post, or maybe they have a front page collating all the “new” posts. Then there is Facebook, whose algorithm operates like a ceramicist sitting at a pottery wheel: A stream of content inputs—some from pages “liked” by the user, some not—forms the clay, and the platform’s proprietary algorithm does the work of shaping the final product. This puts the platform between the content and the user; the platform has significant agency in what users see. Some, like the Justice Department, have argued that the “use of proprietary algorithms” even “blur[s] the line between first and third-party speech.”

But the post shies away from acknowledging Substack’s role in the poster-to-reader transaction. It’s true that Substack exerts less influence than Facebook or Twitter in meting out content to users. But Substack does try to have some impact. If you link your Twitter account with the service, for example, Substack will send you emails suggesting that you subscribe to writers whom you follow on Twitter. The service also has a centralized “Discover” feed that (surely by way of an algorithm) recommends new “Featured” writers to you. In other words, Substack’s proprietary algorithm and marketing emails do shape (or at least try to shape) what readers see. Substack also pays generous advances to high-profile writers and gives some contributors access to health care or a legal defense fund. All of this means, as Wiener writes, that “Substack has made itself difficult to categorize.” It’s not a social media company but, as she puts it, “a software company with the trappings of a digital-media concern.” The blog post makes the service sound more passive than it really is.

Then there’s the question of how Substack makes its money. The post describes how the service’s financial model flows from the writer-to-reader relationship. Digital advertising, the post laments, incentivizes “autoplaying videos, trending tabs, and clickbait” and “sensational material and conflict-driven exchanges.” Substack doesn’t make its money through ads. Instead, the platform takes a 10 percent cut of user subscriptions to individual writers. This setup, the blog emphasizes, means that reader-to-writer trust matters more to Substack’s bottom line than “engagement.”

Again, there’s some truth to this. Different platforms have stronger or weaker incentives to host inflammatory speech. Facebook’s algorithm hunts for posts that “stir[]” users: the “rage-clicks” and “pile-ons.” Facebook users often don’t care about the author of an individual post and experience the platform through News Feed, a continuous flow of stimuli constructed to keep a user scrolling and “liking.” Substack readers, by contrast, mostly experience the site’s content as (often very long and weedy) emails sent from a writer to a subscriber. It’s an entirely different “business model.” And the founders are right that this model requires some level of user “trust” in Substack writers; for Substack to make money, users have to feel enough connection to or “trust” in writers to keep paying for their content. It would take a bold act of defiance to pay $5 a month to “hate-read” a bad-faith Substack newsletter.

Yet this ignores an inconvenient truth for Substack: The platform has plenty of incentives to host inflammatory content. No, the service doesn’t use ad revenue to make money and doesn’t rely on an engagement-based algorithm. But Substack needs the social media algorithms to grow its brand and the audiences of its writers. Wiener references a Substack author named Reggie James, who tells her that “about half his readers come through social networks.” She writes, “[a]s long as writers were beholden to the logic of social-media algorithms, [James] said, Substack was still ‘playing the game of the platforms.’” This means that “rage-clicks, hate reads,” clickbait controversies and “conspiracy theories” play an important role in expanding public awareness of the brand and of its writers. Substack got a huge publicity boost this fall, for example, when Glenn Greenwald, the disaffected founding editor of The Intercept, launched his Substack with a post pushing baseless claims about Joe Biden’s son. Greenwald’s post, which accused his editors at The Intercept of “censor[ing]” him, captivated Twitter, and Greenwald now has the sixth-highest-grossing politics newsletter on Substack. The flare-up ballooned Substack’s public visibility.

A Google Trends graph showing how frequently people searched for “substack” on Google. The peak corresponds to the Oct. 29 release of Greenwald’s column.

The platform also used to have an all-encompassing “Leaderboard” but has since removed it in favor of individualized leaderboards for different subcategories of newsletters. The power rankings can sometimes reward bad behavior; take, for example, a Substack essay awash with voter fraud disinformation that climbed to number four on Substack’s now-defunct leaderboard for free publications.

And then the blog post kind of gives the game away, acknowledging that Substack does, in fact, have some rules. “Of course, there are limits,” the founders concede. “We do not allow porn on Substack, for example, or spam. We do not allow doxxing or harassment. We have content guidelines (which will evolve as Substack grows).” This concession makes the blog post read as pretty equivocal. How does actually having rules square with the bluster about wanting de minimis content moderation? The post stresses that the guidelines are merely “narrowly construed prohibitions with which writers must comply”—but that’s effectively true of all content moderation; the only questions are how narrow the prohibitions really are and whether they are enforced. The argument seems to be that the rules are so circumscribed that they won’t empower any real mobilization of the terms of service police.

But I’m not sure I buy that. Platforms have long struggled to define what “porn” and “harassment” mean for purposes of moderation. Substack’s content guidelines also include prohibitions on “hate” content and on “content that promotes harmful or illegal activities,” both of which resist any sort of easy operational definition. Any one of these rules could lend itself to expansive moderation. Might the rule regarding “content that promotes harmful or illegal activities” provide grounds to ban a Substack dedicated to spreading baseless lies about coronavirus vaccines? Could a Substack writer who denigrates Black Lives Matter protesters run afoul of the prohibition against “serious attacks on people based on their race”? The rules, if enforced, could very quickly put Substack into a position in which bad-faith critics tar the company as engaging in “censorship” and betraying its hands-off commitments. The current content guidelines lack the specificity to clear any of this up: Each rule gets only a single, roughly 50-word paragraph explaining what’s not kosher.

Recall Wiener’s New Yorker reporting that the founders themselves make all moderation decisions. The founders write in the blog post that “[w]e do not seek to impose our views in the form of censorship or through appointing ourselves as the judges of truth or morality.” The rules don’t necessarily put them in a position where they have to judge “truth or morality,” but the founders will have to put on their judge’s robes and make really difficult calls at the margins about whether a post violates the vague content guidelines.

The concession that Substack does have its limits reveals a misapprehension about content moderation and the challenges it raises. The founders write of their argument’s potential detractors:

There are no doubt some people, alarmed by the events of recent history, who will argue that Substack should put free speech concerns behind a need to cultivate a more controlled community that can guarantee safe spaces to all involved. Some people will argue that we should cultivate a community of writers and ideas that fall within a narrow window of a specific conception of respectability; that we should embrace the role of moral police (as long as it conforms with their views).

Maybe. But this caricatures what people might actually want from Substack. Do the founders really think anyone is going to argue that Substack should restrict its roster of writers to those who fit into “a narrow window of a specific conception of respectability”? Does anyone really want them to be the “moral police”? Asking Substack to remove voter fraud disinformation is not asking for a “safe space” or for the intervention of the “moral police.”

This distortion, in the context of the rest of the post, also caricatures content moderation itself. Moderation does require making decisions that inherently implicate “politics” in the most general sense, but it doesn’t necessarily mean deploying the terms of service as an immune system against speech that falls on the wrong side of the culture war. Moderation, at its core, is establishing terms of service and enforcing them. Many of those terms will relate to unsexy things like copyright or personal identifying information. And it’s hard. It’s easy to define moderation as something “bad” and then say that “we won’t be doing that.” It’s much harder to iterate on the terms of service so that they set sufficiently clear limits, to lay down the law even when it may cause pushback, and to develop a transparent enforcement system that doesn’t rely on the intuition of three guys.


Jacob Schulz is a law student at the University of Chicago Law School. He was previously the Managing Editor of Lawfare and a legal intern with the National Security Division in the U.S. Department of Justice. All views are his own.
