
What Has Congress Been Doing on Section 230?

Anna Vinals Musquera, J. Scott Babwah Brennen
Tuesday, May 27, 2025, 3:00 PM

We’ve updated our tracker of Section 230 reform proposals.

Social media icons on an iPhone screen. (Stacey MacNaught, https://flic.kr/p/Y69SeU; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)

Published by The Lawfare Institute
in Cooperation With
Brookings

In recent years, there have been so many proposals to reform Section 230 that it’s impossible to remember which is which. That was the idea behind the Section 230 tracker, which launched in 2021 and which Lawfare began hosting in 2023. Today, we’re excited to announce that we are updating the tracker with a fresh set of reform proposals.

The tracker identifies and categorizes new federal proposals that would revise or abolish Section 230 of the Communications Decency Act. Section 230 shields online platforms from liability for most user-generated content (with limited exceptions). Over time, this once low-profile legal provision has become a focal point in tech policy debates across the aisle. Democratic lawmakers have tended to focus on Section 230’s purported role in enabling harmful content to proliferate online, whereas Republican critics have emphasized concerns that the law provides online platforms with cover to censor (mostly conservative) viewpoints.

Briefly, it seemed like Congress might have lost interest in the statute. In recent years, congressional attention shifted away from Section 230 and toward other technology policy priorities. Lawmakers instead focused on issues including major antitrust initiatives targeting big tech companies, national security concerns surrounding TikTok, the rise of artificial intelligence, and renewed efforts to enact federal data privacy protections.

But after several years of legislators turning their attention elsewhere, interest in revising Section 230 appears to be rebounding. There have already been ten proposals to amend or repeal Section 230 in the first few months of the 119th Congress. The arrival of a new administration critical of the law's broad immunity and recent court decisions paring back that immunity have combined to put Section 230 back on the legislative agenda.

Across these proposals, we see three key trends.

First, there are indirect reforms trying to work around Section 230. A growing number of bills don't seek to amend the text of Section 230 directly, but rather to impose legal duties on platforms or increase oversight through compliance obligations, transparency requirements, or commissioned studies. These reforms functionally reshape the liability landscape while leaving the statute itself untouched. For example, the Transparency in Bureaucratic Communications Act, introduced in the 119th Congress, requires federal inspectors general to report to Congress on any communications between their agencies and online platforms regarding content moderation practices, increasing platform accountability through transparency mandates rather than through changes to Section 230's text.

A second category of Section 230 reform proposals focuses on narrowing the scope of platform immunity in clearly defined contexts. The original model here is SESTA/FOSTA, legislation passed in 2018 that limited Section 230 protections for content related to sex trafficking. These new proposals include carve-outs for particular types of content, such as paid ads, civil rights violations, or child sexual abuse material, as well as bills that tie immunity to proactive compliance measures. For example, the SAFE TECH Act would remove protections for certain harmful content and allow users to bring civil claims, while the EARN IT Act and STOP CSAM Act would condition immunity on platforms demonstrating robust efforts to combat child exploitation. The recently enacted Take It Down Act and the proposed Intimate Privacy Protection Act similarly seek to hold platforms liable when they fail to remove or prevent intimate digital abuse. Though they vary in scope, these measures reflect a shared legislative strategy: using targeted limitations on Section 230 as a way to increase accountability for egregious online harms.

Finally, there are the calls to sunset Section 230 entirely. A forthcoming bipartisan bill led by Senators Lindsey Graham (R-SC) and Dick Durbin (D-IL) proposes to sunset Section 230 on January 1, 2027, unless Congress enacts a replacement framework. Though not yet formally introduced, the legislation is intended as a strategic lever to compel stakeholders, including lawmakers, platforms, and civil society, to negotiate comprehensive reform before the law's protections expire. Supporters view the sunset provision as a necessary catalyst to break through years of stalled reform, while critics caution that it could introduce significant legal uncertainty and disproportionately burden smaller platforms and users in the absence of a clear successor framework. The proposal marks a departure from more targeted reform strategies, signaling a broader shift in legislative posture toward intermediary liability.

From bipartisan attempts to sunset Section 230 entirely to more targeted amendments, lawmakers continue to introduce new proposals at a striking pace. This renewed momentum around Section 230 reform underscores how central the debate over online content moderation has become, and it shows no signs of abating in the current Congress.


Anna is an M.S. candidate in Global Security, Conflict, and Cybercrime at New York University and a Graduate Research Assistant at NYU’s Center on Tech Policy. She has more than six years of experience as an attorney specializing in regulatory, privacy, and cybersecurity law, and her work focuses on the intersection of AI, law, and emerging regulation.
J. Scott Babwah Brennen is the director of the Center on Technology Policy at NYU.