What’s in a Name? Quite a Bit, If You’re Talking About Section 230
As Congress decides whether to change the legal underpinnings of the internet, we need a better understanding of why it passed Section 230 of the Communications Decency Act in the first place.
Twenty-six words in the U.S. Code created the legal framework for the internet that we know today. Until a few years ago, few people outside of tech policy circles knew much about those 26 words, which form the heart of Section 230 of the Communications Decency Act. The law provides online platforms such as websites and social media services with broad protection from liability arising from many types of user-generated content.
But amid high-profile failures of companies to block harmful content and allegations that social media services suppress certain political viewpoints, Section 230 is now constantly in the news. Republicans and Democrats alike are calling on Congress to reconsider these extraordinary protections. As Congress decides whether to change the legal underpinnings of the internet, we need a better understanding of why it passed Section 230 in the first place.
Many misconceptions about Section 230 arise from the informal name by which it is most commonly known—Section 230 of the Communications Decency Act—which understandably leads many observers to conclude that Section 230 was only about online decency. But that was not the name it was given initially, and the story of the evolution of the legislation and its name helps to illustrate its complex congressional purpose and some of the pitfalls in recent attempts to interpret what motivated Congress to pass it.
Understanding the law’s history is not as simple as one might expect, thanks to Section 230’s long and twisty route from techno-utopian ideal to Title 47 of the U.S. Code.
Misunderstandings of Section 230’s history already have framed the current debate, including claims that Section 230 applies only to “neutral platforms” and assumptions that Congress passed the statute to censor speech through private companies. In reality, Congress passed Section 230 so that platforms could choose not to be neutral and to moderate content based on the demands of their users (rather than regulators or judges). I spent more than two years researching and writing a book about Section 230, and I found that Congress passed Section 230 to empower consumers and platforms—rather than the government—to develop the rules of the road for the nascent commercial internet.
Members of Congress introduced Section 230 to address an illogical gap in the law that applied to distributors of content created by others. Under a First Amendment and common law rule, which first emerged in obscenity prosecutions of bookstore owners in the 1950s and 1960s, a distributor of someone else’s speech is liable for that speech only if the distributor knew or had reason to know of the particular illegal content. This rule, for instance, prevented newsstands from having to prescreen every newspaper and magazine they sold for fear of obscenity prosecutions and defamation lawsuits.
This common law protection worked pretty well until the early 1990s, when online services emerged, carrying far more content than the average bookstore or newsstand. These early platforms had different approaches for moderating third-party content on their bulletin boards and chat rooms. CompuServe took a relatively hands-off approach, while Prodigy billed itself as a family-friendly alternative, establishing intricate user content policies, hiring moderators to police its sites and using automated software to screen for offensive content.
Yet both CompuServe and Prodigy were sued for defamation arising from third-party content posted on their services. CompuServe convinced a New York federal judge to dismiss the case. The judge concluded that CompuServe was nothing more than an electronic version of a bookstore. Applying the common law rule for distributors, the judge ruled that CompuServe could not be liable because it did not know and had no reason to know of the alleged defamation.
Prodigy, however, was not so fortunate. Because the platform chose to moderate user content, a New York state court judge ruled in 1995, it did not receive the protections afforded to distributors. Rather, the judge held that Prodigy was a publisher and thus faced the same potential liability as the author of the posts.
Taken together, these court rulings meant that a platform might reduce its potential liability by taking a hands-off approach, as CompuServe did. This was particularly concerning to some members of Congress in 1995, as media coverage hyped the problem of “cyberpornography” that was increasingly available to children as internet access proliferated in homes and schools. Some members of Congress worried that Prodigy’s loss would discourage online services from moderating content and blocking access to pornography and other harmful materials.
The Prodigy case prompted discussions between two tech-savvy House members: California Republican Chris Cox and Oregon Democrat Ron Wyden. On June 30, 1995—a little over a month after Prodigy’s courtroom defeat—they introduced the bill that would become Section 230. “Communications Decency Act” was nowhere in the bill’s title or text. Instead, it was titled the “Internet Freedom and Family Empowerment Act.” And that title is a far more accurate description of the bill’s goals.
The bill has two key provisions. The first, which comprises what I contend are the 26 words that created the internet as we know it today, is: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This means that unless a claim falls into one of the bill’s few explicit exceptions—such as federal criminal law or intellectual property—a provider or user of an interactive computer service cannot be held liable as the speaker or publisher of information provided by someone else. The effect is this: If you write a defamatory post about me on Facebook, I can sue you, but I can’t sue Facebook. This is where the “Internet Freedom” part of the title comes in. Cox and Wyden recognized that the internet had great commercial potential, and they did not want to bog down these emerging companies with the potential for huge liability. In the initial version of their bill, Cox and Wyden included a provision that explicitly prohibited the Federal Communications Commission (FCC) from “economic or content regulation of the Internet or other interactive computer services,” though this provision was removed from the bill that was signed into law.
The second provision in the Cox-Wyden bill states that providers and users of interactive computer services cannot be held liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” This is the “Family Empowerment” piece of the bill. By shielding both providers and users of interactive computer services from liability for blocking content, Cox and Wyden hoped to empower parents and other consumers to choose the services that best met their expectations for content moderation. With this provision, Cox and Wyden also wanted to encourage families to use blocking software such as NetNanny, which they viewed as a far better alternative to government regulation of the internet.
Cox and Wyden included congressional findings and policy statements that explain their two primary goals. First, they touted the need to avoid regulating the internet, writing that it is U.S. policy “to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.” They wrote that the internet offers “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity” and that it has “flourished, to the benefit of all Americans, with a minimum of government regulation.”
Second, Cox and Wyden recognized that interactive computer services “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.” The pair wrote of the need to “remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material.”
Many of these findings and policy statements came from a report on online user empowerment organized by the Center for Democracy and Technology, which worked with Cox, Wyden, technology companies and civil liberties groups to draft the Internet Freedom and Family Empowerment Act.
The House faced political pressure to pass the Cox-Wyden bill because the Senate was taking a very different approach to the cyberpornography scourge: the Communications Decency Act (CDA). Sen. James Exon introduced the CDA on Feb. 1, 1995, nearly five months before Cox and Wyden introduced their bill. Exon’s bill proposed criminal penalties for the transmission of indecent materials to minors. In June, the Senate added Exon’s bill to the Telecommunications Act, the first overhaul of U.S. telecommunications law in six decades.
Civil liberties groups were furious about the free speech implications of the CDA. So were many House members, including then-Speaker Newt Gingrich, who called the bill “clearly a violation of free speech.”
Still, it was untenable for the House to entirely ignore the online pornography debate, particularly when Congress was revamping the nation’s telecommunications laws. Cox and Wyden’s Internet Freedom and Family Empowerment Act was seen as the primary alternative to the CDA. On Aug. 4, 1995, the House considered the bill as one of many amendments to its version of the Telecommunications Act.
Much of the debate about the telecom act focused on landline telecommunications issues such as competition between local and long-distance phone companies. Though cyberpornography was a hot-button issue, the internet was a bit of a sideshow during debate over the bill. The Cox-Wyden bill faced little pushback when it came up for discussion on the House floor. Cox urged his colleagues to avoid regulating the new technology. “If we regulate the Internet at the FCC, that will freeze or at least slow down technology,” he said. “It will threaten the future of the Internet. That is why it is so important that we not have a Federal Computer Commission do that.”
In a 420-4 vote, the House added the Cox-Wyden amendment to its telecom act. But Exon’s CDA was in the Senate’s version of the act. In a compromise, the conference committee included both the Cox-Wyden amendment and a version of Exon’s amendment in the final Telecommunications Act. The Cox-Wyden bill was mostly unchanged, though the conference committee deleted the prohibition on FCC regulation of the internet and added a sentence explicitly preempting any state or local laws that conflict with Section 230.
Many of the misunderstandings about Section 230’s purpose are the unfortunate consequence of this attempt at bicameral compromise. Because both the Cox-Wyden and Exon provisions dealt with online decency (though in very different ways), they were placed in Title V of the Telecommunications Act of 1996, formally titled “Obscenity and Violence” but given the short title of “Communications Decency Act of 1996.” The Cox-Wyden provisions appear in Section 509 of that title, under the heading “Online Family Empowerment.” That section creates a new section of the Communications Act of 1934—Section 230—which is titled “Protection for Private Blocking and Screening of Offensive Material” and contains the updated Cox-Wyden bill. The two immunity provisions appear under a subheading titled “Protection for ‘Good Samaritan’ Blocking and Screening of Offensive Material.”
President Clinton signed the Telecommunications Act of 1996 on Feb. 8, 1996. The same day, civil liberties groups brought a First Amendment challenge to the CDA, and the next year the Supreme Court struck down much of the CDA as unconstitutional. The civil liberties groups’ wrath and the Supreme Court’s admonition shared the same target: Exon’s language restricting the transmission of indecent online content.
After the Court’s ruling, most of the Communications Decency Act that Exon drafted was no longer law, but Cox and Wyden’s Section 230 was. Herein lies the source of the confusion surrounding the name used to refer to Section 230. Because Cox and Wyden’s legislation was placed in the same title of the Telecommunications Act of 1996 as Exon’s amendment, courts soon began referring to it as “Section 230 of the Communications Decency Act,” though it technically should be called “Section 230 of the Communications Act of 1934, as amended” or “Section 509 of the Telecommunications Act of 1996.” But the best descriptor of its two primary goals remains its initial title: the Internet Freedom and Family Empowerment Act.
Because of that popular title, “Section 230 of the Communications Decency Act” is often conflated with the very different approach that Exon took. For instance, in a recent Lawfare piece, Jed Rubenfeld argued that Section 230’s protections for moderation render some large platforms state actors subject to the First Amendment’s restrictions. Rubenfeld relied in part on the CDA’s title, writing that the Communications Decency Act “was so named because it purported to criminalize ‘indecent’ and ‘offensive’ online content” and that “the legislation’s primary purpose was not in the least neutral. Congress’s express goal was to extirpate offensive speech from the internet.” He quotes a statement from Exon to support his argument, and he writes that “Section 230 cannot be extricated from these statutory purposes, of which it was originally a part.”
I believe that Section 230 can and should be extricated from Exon’s purposes, at least partly. Section 230 was part of the CDA only in the sense that it was placed under the same title of the final telecommunications bill. To be sure, Congress did not pass the Cox-Wyden proposal on its own; it folded the measure into the much larger telecommunications reform statute, and Exon’s CDA entered that bill as part of the conference committee’s compromise. That context matters. But Exon’s provisions should be afforded no more interpretive weight for Section 230 than any other part of the massive telecommunications bill. Section 230 was proposed as the alternative to the CDA’s onerous and ultimately unconstitutional restrictions on speech. By empowering parents and other users—rather than a “Federal Computer Commission”—Section 230 actually sought to protect speech from regulation.
It is true that Section 230 provides platforms with the breathing room to moderate (or not moderate) third-party content. Whether that renders platforms state actors is a question that Rubenfeld and Alan Rozenshtein have thoroughly and thoughtfully debated. But it is absolutely critical to remember that both immunity provisions of Section 230 explicitly apply not only to providers of interactive computer services but also to users. This was not a typo; it reflects the very clear preference of the drafters: user empowerment.
User empowerment recognizes that some platforms may moderate more than others and users will decide which to gravitate toward. This framing ultimately favors the free market over government regulation. The 1995 Center for Democracy and Technology report that provided the foundations for Section 230 emphasized the need to allow diverse approaches: “To be sure, some system operators will want to offer services that prescreen content. However, if all systems were forced to do so, the usefulness of digital media as communication and information dissemination systems would be drastically limited. Where possible, we must avoid legal structures that force those who merely carry messages to screen their content.”
Section 230 provided the legal framework for the open internet that we know today: the good, the bad and everything in between. As Congress determines whether to overhaul this fundamental law, it must have a clear and complete understanding of Section 230’s history and purpose.
The views expressed in this piece are the author’s alone and do not reflect the views of the Naval Academy, the Department of the Navy or the Defense Department.