
The Justice Department’s Good Ideas for Platforms Needn’t Be Done Through Section 230 Reform

Mark MacCarthy
Friday, June 26, 2020, 11:27 AM

The Justice Department’s recently released plan to reform Section 230 has drawn predictably partisan reactions. But the report includes a couple of wise ideas.



The Justice Department’s recently released plan to reform Section 230 has drawn predictably partisan reactions. Democratic Sen. Richard Blumenthal, for instance, rejected it, saying, “I have no interest in being an agent of Bill Barr’s speech police.”

Section 230 of the Communications Decency Act was passed in 1996 as part of an unrelated law to reform the telecommunications industry. It does two things: It says online companies, including social media platforms, are not liable for publishing the speech of their users, and it says they are not liable if they act in good faith to remove content they think is objectionable. The law has been credited with allowing online companies to grow without the burden of expensive litigation over their users’ postings. But increasingly partisan calls for reform divide Democrats, who want to force social media companies to remove more harmful content like disinformation and hate speech, from Republicans, who hope reform will end what they see as social media bias against conservatives.

But abstracting from the inevitable political theater allows policymakers, advocates and scholars to see that the Justice Department’s report contains thoughtful, constructive responses to two fundamental questions connected with the content moderation activities of online platforms. The report contains many different proposals; below are my comments on a few that I found particularly worthwhile.

One worthwhile question the report ponders is whether users should have a right of access to these platforms if the companies violate certain due process obligations. Another is whether platforms have to enforce their rules in ways that are fair to all sides of the political spectrum. Discussions of 230 reform often entail navigating the legal nuances and partisan gripes about the much-maligned law. But the good news is that the Justice Department’s thoughtful policy recommendations in these two areas can actually be addressed through legislative avenues other than Section 230 reform.

Legislation introduced this week by Sens. Brian Schatz and John Thune shows how these reforms can be accomplished through free-standing legislation. The bipartisan Platform Accountability and Consumer Transparency (PACT) Act would mandate transparency and due process requirements for social media companies to be enforced by the Federal Trade Commission.

Should Platforms Have Due Process Obligations?

Of course, platforms can set their own rules. But if a user plays by those rules, is the platform justified in removing him or her anyway? Are there any limits on arbitrary and capricious content moderation practices by platforms? What are the standards for unjustified removal?

The Justice Department is not alone in thinking about these questions. The European Commission is proposing a new Digital Services Act, which seeks among other things to impose rules on online platforms for “more effective redress and protection against unjustified removal for legitimate content … online.”

The Justice Department’s response to this question is essentially that in their removal decisions “platforms must rely on—and abide by—their terms of service.” The department seeks to impose this obligation by saying that if a platform takes action against a post without grounding the action in its terms of service, the company can be construed as not acting in good faith, and so will lose its Section 230 immunity for removals. Stripped of its Section 230 packaging, the proposal is that removals are justified only if they are “consistent with” the platform’s “publicly available terms of service or use that state plainly and with particularity the criteria the platform will employ in its content-moderation practices” and the platform provides the user whose content has been removed with “a timely notice explaining with particularity the factual basis” for the removal.

This is actually helpful. It would be valuable as well for any enacting legislation to clarify the “consistency” language so that the new law cannot be read to prevent terms of service from evolving in the face of novel examples of objectionable content. In this vein, the legislation should also make clear that platforms can react quickly and take down posts without providing notifications and explanations in emergency situations such as the broadcast of the Christchurch massacre in 2019. But the general idea that online platforms must adhere to a body of reasonably specific public rules seems essential to any notion of due process in content moderation. Legislation should even go one step further, mandating disclosures from the platforms about their enforcement techniques.

Similarly, the requirement of notice and an explanation for a removal decision is a worthwhile due process measure that’s easy to get behind. Any rule requiring notice and explanations should also mandate that platforms notify users about complaint procedures, both for users whose content has been taken down and for users who have complained about objectionable content. But again, the Justice Department is on the right track.

The proposal comes up short, however, on the question of enforcement. Essentially, the Justice Department’s proposal is that users should enforce these due process protections themselves through court action. First, users would need to convince a court that a takedown had not been “consistent” with a social media company’s public standards, which would be hard enough. A court’s ruling that a platform acted inconsistently with its standards would mean that the social media company would lose its immunity from liability for removing the material because it had not acted in good faith. Then the users could argue that the removal created some other cause of action. But the Justice Department proposal does nothing to give users a basis for a wrongful removal claim. It turns creative attorneys loose to find these underlying carriage rights somewhere in current law, perhaps in contract law, consumer protection law or the Constitution.

The report is confined to this enforcement mechanism because it presents itself as a Section 230 reform. Section 230 is, after all, about liability protections. The report seeks to tighten the existing conditions on Section 230’s liability shield so that platforms enjoy immunity from suits stemming from removal decisions only if they provide these due process protections. If they don’t, they can be sued for wrongful removal. But the department’s proposal does not create a carriage right, and it does not clarify where in current law this right of carriage exists.

Merely removing immunity from liability, of course, does not create liability if none exists to begin with. The Justice Department recognizes this by saying that its proposed removal of the de facto blanket immunity in existing Section 230 does not “itself impose liability for content moderation decisions.” But it does not attempt to create new civil or criminal liability for platforms that fail to provide these due process protections.

The better course of action is to impose transparency rules not through a new cause of action—which would lead to endless, destabilizing litigation—but through free-standing legislation that mandates due process and transparency, and gives an enforcement role, rule-making authority and ongoing supervisory responsibilities to a regulatory agency such as the Federal Trade Commission. The Federal Trade Commission has extensive experience in taking action against unfair and deceptive trade practices under its Section 5 authority. With congressional authorization, it could apply this expertise to the new realm of due process measures for content moderation decisions of online platforms.

Should Platforms Be Fair?

The second, related issue the Justice Department’s report addresses thoughtfully is whether platforms have an obligation to be fair to all political perspectives in the enforcement of their rules. The department’s report acknowledges “concerns that large platforms discriminate against particular viewpoints in enforcing their content moderation policies.” These concerns have come primarily from conservative activists, but progressive groups such as Black Lives Matter have also complained about discriminatory removals of their material from social media platforms.

Republican Sen. Josh Hawley wants to hold platforms accountable for what he sees as political bias against conservatives. He has introduced legislation to empower the Federal Trade Commission to certify that a social media platform “does not moderate information provided by other information content providers in a manner that is biased against a political party, political candidate, or political viewpoint.” Hawley wants to make Section 230 immunity contingent on obtaining such a certification of fairness.

The Justice Department wisely avoids this linkage of Section 230 immunity to a fairness requirement. Instead, its suggestion for addressing this concern of political bias is essentially to mandate “disclosure of enforcement data” by platforms. The availability of “robust enforcement data would enable policymakers and the public to evaluate whether platforms are enforcing content moderation policies even-handedly across different political viewpoints and communities.” Complaints about bias in enforcement are so far just anecdotes. And given the scale at which global platforms operate, examples of apparent bias on all sides of the political spectrum can easily be found, and exaggerated. If government requirements force companies to disclose enforcement data, that would open the door to systematic studies, ideally by vetted experts. The Justice Department notes that these studies might “reveal that claims of bias are well-founded” (although, of course, the claims could also turn out to be bunk) and so “inform consumer choices or policy solutions.”

In support of this recommendation for disclosure of enforcement data to assess political bias, and to link it to its Section 230-reform agenda, the Justice Department argues that “true diversity of political discourse” was among the goals of Section 230. But this is a misreading, and an unnecessary one. Political diversity on the internet is a factual finding providing background for Section 230’s requirements, not a policy goal. In Section 230(a)(3), Congress finds that online platforms “offer a forum for true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.” But diversity of political discourse is not among the law’s policy goals, which are listed in Section 230(b), a different subsection of the law. And even if it were a policy goal, instead of a factual description of the internet in 1996, the phrasing does not suggest that each online platform has to offer this diversity of political discourse any more than each platform has to offer all the “unique opportunities for cultural development” or all the “myriad avenues for intellectual activity.” Nor would political diversity within a given platform necessarily be required to achieve that aim: Diversity of political discourse could be supplied just as easily by a diversity of partisan platforms as by requiring each platform to be politically diverse. A more reasonable interpretation of the law’s diversity finding, if one is needed, is that the law removes a possible obstacle (liability for moderating content) that would otherwise prevent an online platform from moderating its system so as to provide a diversity of views if it wants.

The Justice Department’s report cites the Enigma Software case in support of its thesis that Section 230 has a diversity goal. But that case from the U.S. Court of Appeals for the Ninth Circuit stands for the narrow idea that a company cannot hide an anti-competitive act or practice behind the Section 230 shield. The ruling certainly does not endorse the idea that a goal of Section 230 is to further diversity or impartiality on a platform-by-platform basis.

Section 230 simply does not address the larger question of political diversity and platform impartiality. Moreover, despite the report’s assertion that the larger online platforms “effectively own and operate digital public squares,” an assertion that, if taken literally, would make them public forums subject to constitutional constraints on censorship, the Constitution provides no basis for an access requirement for social media companies. Under today’s First Amendment jurisprudence, privately operated platforms are not public spaces required to accept legal speech from all comers. In Prager University, the U.S. Court of Appeals for the Ninth Circuit said as clearly as can be said that a social media platform like YouTube, “despite [its] ubiquity and its role as a public-facing platform, … remains a private forum, not a public forum subject to judicial scrutiny under the First Amendment.”

Indeed, in a May 2020 court filing in defense of the constitutionality of Section 230, the Justice Department itself argued that individuals have no First Amendment right of access to a privately operated online platform like YouTube. So, “the liability protection Section 230(c) affords to YouTube likewise does not implicate the First Amendment.”

Nevertheless, while impartiality might not be required by the Constitution, the notion that social media companies should be fair to all sides and reflect the diversity of opinion in the national and local communities in which they operate is an attractive policy ideal. Such an aspiration seems to embody the free expression goals of the First Amendment and the principle of international law that people should have the freedom “to seek, receive and impart information and ideas through any media ….” This ideal of impartiality also mirrors the moral message of the Supreme Court’s Red Lion decision upholding the fairness doctrine for broadcasters and the United Kingdom’s requirement that broadcasters maintain “due impartiality” in their programming.

But a fairness requirement would have its own difficulties under the First Amendment, which courts increasingly interpret as prioritizing the speech rights of media owners rather than those of viewers and users. The Federal Communications Commission repealed the broadcasting fairness doctrine more than 30 years ago in part out of concern that its spectrum scarcity rationale no longer held up, since there were so many alternatives to broadcasting for viewers and listeners. That rationale held that the broadcast fairness requirement passed First Amendment scrutiny because technological limits on the number of broadcasters in a local area left listeners and viewers too few alternatives for obtaining access to different views; the fairness doctrine required each broadcaster to provide those alternative perspectives. Moreover, outside broadcasting, current U.S. jurisprudence frowns on government mandates for access and fairness. The Supreme Court’s 1974 Tornillo decision struck down a Florida law requiring newspapers to grant access to their pages, holding that the compelled access was an unconstitutional infringement on the expressive rights of newspaper editors.

Rather than confront these thorny constitutional and policy issues, the Justice Department report wisely calls for more enforcement data to inform the policy discussion. We simply don’t know whether the large platforms are biased in their enforcement decisions, and so we don’t know whether there is a deficit of balance that policy should consider fixing. The way forward is transparency, not a fairness mandate.

Transparency Legislation Is the Way to Go

In his sweeping “Case for the Digital Platform Act,” Harold Feld at Public Knowledge decried the “enormously destructive distraction” of the debate over Section 230. The Justice Department’s confused attempt to insert reasonable policy recommendations into a debate on Section 230 reform only reinforces Feld’s recommendation that “we stop arguing about Section 230 and figure out what sort of content moderation regime works.” If necessary, he urges, free-standing content moderation legislation could begin with “the following introductory words: ‘Without regard to Section 230 ….’”

A year ago, in a working paper for the Georgetown Institute for Technology Law and Policy, I recommended legislation embodying a consumer protection approach to content moderation. This framework would impose disclosure, accountability and transparency duties on online companies, to be enforced by the Federal Trade Commission under its unfairness and deception authority. In a white paper on content moderation transparency for the Transatlantic Working Group on Content Moderation and Freedom of Expression, I repeated that call and argued for transparency regulation supervised by a regulator empowered to require disclosure of an online company’s content rules, enforcement procedures, complaint processes, the terms of reference of related algorithms, and data for qualified researchers to conduct independent assessments. The group’s final report reflected these transparency recommendations.

The legislation introduced by Schatz and Thune takes a step in this direction of transparency and due process. Section 5 of their bill would require the larger online platforms to have in place public acceptable-use policies and complaint systems. The bill would require these platforms to process complaints within 14 days and to provide users whose content is removed with notice, an explanation of the takedown rationale and an opportunity to appeal. Platforms would also have to publish quarterly transparency reports. These provisions would be enforced by the Federal Trade Commission. The bill also reforms Section 230 in a separate section, but the transparency requirements are free-standing.

Congress should act now to pass free-standing legislation to require transparency and accountability for online companies, including disclosure of enforcement data to allow for fairness assessments. In the meantime, Congress should keep the Section 230 regime in place to provide legal certainty for platforms and users.


Mark MacCarthy is nonresident senior fellow at the Brookings Institution, senior fellow at the Institute for Technology Law and Policy at Georgetown Law and adjunct professor in Georgetown’s Communication, Culture & Technology Program.
