Published by The Lawfare Institute
Add the Federal Communications Commission to the list of government branches and agencies weighing in on tech industry practices. On Aug. 3, FCC Chairman Ajit Pai announced that the agency is inviting public comment on the petition for rulemaking filed by the National Telecommunications and Information Administration (NTIA)—an agency within the Department of Commerce—regarding Section 230 of the 1996 Communications Decency Act. The petition followed President Trump’s executive order on “Preventing Online Censorship,” which called on the NTIA to file a petition with the FCC to clarify the scope of Section 230—a key provision that shields the providers and users of online services from liability for third-party content. Section 230 holds that providers and users of interactive computer services will not be treated as the publisher or speaker of information provided by a third party, even when the provider engages in content moderation, so long as that moderation is carried out in good faith.
Notably, the executive order came shortly after Twitter appended a fact-checking label to a presidential tweet for the first time. In the order, the president denounces “censorship” by online platforms and calls for heightened transparency and accountability for social media platforms. Specifically, the order calls for “clarify[ing]” and “limit[ing]” the law’s immunity protections for online platforms in moderating user and third-party content. To these ends, the order directs the NTIA to file a petition for rulemaking with the FCC, requesting that the FCC review and clarify the provisions of the law. That petition was filed in early August and asks the FCC to define the key “good faith” term in Section 230.
In sum, the petition seeks to exploit the ambiguity of the “good faith” phrase in order to exclude from the act’s protections—on which all platforms rely—any content moderation that could be motivated by political bias. Yet while the proposals in the NTIA petition are framed as promoting transparency and standardized legal rules, they would have the effect of limiting platforms’ ability to remove objectionable content. The proposals would also reduce platforms’ discretion to curate and promote content in line with their business needs and users’ expectations.
The legality of the executive order itself is on shaky ground in several respects. Generally speaking, the legislative branch has the sole power to amend, repeal or supplement a statute. For this reason, the executive order may be vulnerable to constitutional challenges on the grounds that the president is attempting to circumvent and encroach on congressional authority. Most importantly, Section 230 does not reference the FCC at all, calling into question whether the agency has the necessary authority to begin with. Though Section 230 is in the Communications Decency Act, which the FCC is broadly regarded as having the authority to implement, the commission would need to demonstrate the necessary delegated authority from Congress to promulgate any rulemaking that would effectively amend a congressional statute. Meanwhile, the Center for Democracy & Technology, a nonprofit advocacy group for internet freedom, has filed a lawsuit to block the order on First Amendment grounds.
For now, however, the rulemaking process continues to move forward. Once a petition for rulemaking is filed with the FCC, it is the FCC’s regular practice to open a forum for the public to present input and comments on the proposed rulemaking. In his invitation for public comment, the chairman dismissed widespread criticism urging the FCC to ignore the NTIA’s petition. Now, the FCC will receive and review feedback from the public on the issue over the course of the 45 days following the Aug. 3 announcement.
The petition closely follows the executive order’s directives, highlighting areas for clarification and potential amendments to Section 230. Echoing the order, the NTIA petition alleges “anti-conservative bias” in the content moderation policies of major social media platforms such as Facebook and Twitter. It argues that when an internet service “appears to reflect a particular viewpoint,” the platform’s status must shift from “service” to “content” provider, given its nature as a “vehicle of expression” favoring a particular stance in public debate. As it currently stands, Section 230 does not distinguish among providers based on so-called viewpoint neutrality; all internet service providers are treated the same, regardless of any apparent viewpoints. The petition proposes interpretive regulations that would strip such reclassified content providers of the law’s liability shield—carving out a new designation in an attempt to withhold Section 230 immunity from providers that appear to promote a particular viewpoint. The proposals seek to curtail alleged platform-led “editorializing” by attempting to nullify the existing liability exemption for platforms deemed “not politically neutral.”
As a means to these ends, the petition targets the law’s current Good Samaritan protections for blocking and screening of offensive materials. When Congress passed Section 230 in 1996, it sought to support the growth of the then-nascent internet by reducing the legal uncertainty for online intermediaries that host user-generated content, and to encourage providers to engage in content moderation without fear of legal consequences. The Good Samaritan provision was thus intended to give providers the flexibility to moderate and curate third-party content according to their needs, while also allowing platforms to remove content without fear of being held liable for harmful material left unfiltered.
This provision appears in § 230(c)(2)(A), which offers immunity to providers from civil liability for any action taken in good faith to restrict or remove material that the provider or its users consider to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Notably, the NTIA petition calls on the FCC to clarify and refine the “good faith” and “otherwise objectionable” terms in § 230(c)(2)(A)—whereas the specific language giving platforms immunity from civil liability is under § 230(c)(1). Thus, the NTIA petition takes a number of novel legal steps: First, it proposes placing any moderation with so-called political bias outside the statute’s definition of good faith; and second, it invents a connection whereby a failure to act in good faith under § 230(c)(2)(A) deprives a platform of immunity under § 230(c)(1).
The effort to link §§ 230(c)(1) and (c)(2)(A) together is pivotal to understanding just how far-fetched the NTIA petition is. While (c)(2)(A) is important because it offers an easy way to dispose of obviously nonmeritorious lawsuits over typical platform actions like content filtering, it is less vital to the functioning of the internet than (c)(1): Very few causes of action exist that would hold platforms liable for discrimination even in Section 230’s absence. In contrast, (c)(1) protects platform providers from lawsuits that are far more likely to succeed and could have costly effects, such as secondary liability for defamation. The trouble for the White House is that there is no hook in (c)(1) for FCC rulemaking. Thus, the petition seeks to transform (c)(2)(A) into a regulatory scheme for use by the FCC.
The petition makes no effort to hide its goal of implementing regulatory control over platforms, which have so far regulated themselves. In articulating its recommendations for reform, the NTIA directly and explicitly identifies social media platforms as targets of the proposed reform efforts. The petition advances the view that reform of Section 230 is necessary in light of modern technological trends. It voices concern that social media platforms are making content moderation decisions that are “ideologically driven”—at one point citing an instance in which Facebook removed content pertaining to firearms from its site, and arguing that the restriction was “chilling the speech” of a gun rights supporter.
However, this framing misconstrues both the scope of the First Amendment and the purposes of Section 230. Freedom of speech protects private individuals against state interference. Because platforms are private actors, there can be no First Amendment claim against them—and, by the same token, they cannot infringe another private individual’s constitutionally protected speech. Moreover, platforms such as Twitter are at liberty to curate their content not because of the liability exemption under Section 230, but because of the First Amendment rights that they themselves enjoy: As private actors, they have constitutionally protected rights of expression and association. Section 230 merely immunizes platforms from civil suits arising from third-party content posted to their sites, even when the platform ordinarily works to screen out such offending material.
Finally, the administration’s focus on “political neutrality” as it relates to Section 230 is misleading. The statute’s language and its legislative history do not suggest political neutrality as a desired end goal. To the contrary, Section 230 was intended to encourage an internet that was both safe and free.
According to the FCC’s timeline, the public comment period will end on Sept. 17. It remains unclear whether the comment period will lead to any binding legal alterations to the scope of Section 230. The executive order has already faced legal challenges, and legal observers and stakeholders have dismissed and often denounced these actions by the administration as political theater.
But as others have observed, the current regulatory action can still harm the free speech rights of platforms and their users in the interim, regardless of whether the actions are legal. Some critics suggest that the executive order and subsequent agency action are targeted efforts to deter social media platforms from engaging in content moderation, such as fact-checking and restricting access to dangerous misinformation, that would reflect negatively on the administration. Indeed, the administration’s public attacks on private entities have depressed those companies’ stock prices. And this threat alone—coupled with repeated attacks on key protective legislation—could deter social media platforms from future moderation out of fear of additional retaliation.