Published by The Lawfare Institute in Cooperation With
The attorney general is holding a workshop on Section 230 of the Communications Decency Act, asking whether the law can be improved. Section 230 does need work, though there’s plenty of room for debate about exactly how to fix it. These are my mostly tentative and entirely personal thoughts on the question the attorney general has asked.
Section 230 gives digital platforms two immunities—one for publishing users’ speech and one for censoring users’ speech. The second is the bigger problem. Here, I’ll examine them one by one.
Immunity for What Users Say and Do Online
When Section 230 was adopted in 1996, it would have been impossible for a service like AOL to monitor its users in a wholly effective way. AOL couldn’t afford to hire tens of thousands of people to police what was said in its chatrooms, and the easy digital connection it offered was so magical that no one wanted the service to be saddled with such costs. Section 230, which granted platforms broad immunity for third-party content published on their services, was an easy sell.
A lot has changed since then. Facebook and other major technology platforms have, in fact, already hired tens of thousands of people to police what is said on their platforms. Combined with artificial intelligence (AI), content fingerprinting and more, these human monitors work with considerable success to stamp out certain kinds of speech. And although none of these efforts is foolproof, preventing the worst online abuses has become part of what users expect from social media. The sweeping immunity Congress granted in Section 230 is as dated as the Macarena, another hit from 1996 whose appeal seems inexplicable today. Today, jurisdictions as similar to the United States as the United Kingdom and the European Union have abandoned such broad grants of immunity, making it clear they will severely punish any platform that fails to censor its users promptly.
That doesn’t mean the U.S. should follow the same path. Americans don’t need a special, harsher form of liability for big tech companies. But why are we still giving them blanket immunity from ordinary tort liability for the acts of third parties? In particular, why should they be immune from liability for utterly predictable criminal use of warrant-proof encryption? I’ve written on this recently and won’t repeat what I said there, except to make one fundamental point.
Immunity from tort liability is a subsidy, one that government often gives to nascent industries that have captured the nation’s imagination. But once these industries grow big, the harm they can cause grows as well—and that immunity has to be justified anew. In the case of warrant-proof encryption, the current justifications are thin. Section 230 allows tech companies to capture all the profits to be made from encrypting their services while exempting these companies from the costs they are imposing on underfunded state and local police forces and victims of crime.
That is not how U.S. tort law usually works. Typically, courts impose liability on the party that is in the best position to minimize the harm a new product can cause. Here, that’s the company that designs and markets an encryption system with predictable impact on victims of crime. Many observers believe that the security value of unbreakable encryption outweighs the cost to crime victims and law enforcement. Maybe so. But why leave the weighing of those costs to the blunt force and posturing of political debate? Why not decentralize and privatize that debate by putting the costs of encryption on the same company that is reaping its benefits? If the benefits outweigh the costs, the company can use its profits to insure itself and victims of crime against those costs. Or it can seek creative technical solutions that maximize security without protecting criminals—solutions that will never emerge from a political debate. Either way, imposing tort liability makes this a private decision that companies can make with few externalities, and the company that does the best job will end up with the most net revenue. That’s the way tort law usually works, and it’s hard to see why the U.S. shouldn’t take the same tack for encryption.
Immunity for Censoring Users
The harder and more urgent Section 230 problem is what to do about Silicon Valley’s newfound enthusiasm for censoring users whose views it disapproves of. I confess to being a conservative, whatever that means these days, and I have little doubt that social media content moderation rules are biased against conservative speech. This is hard to prove, of course, in part because social media has a host of ways to disadvantage speakers who are unpopular in the Valley. Their posts can be quarantined so that only the speaker and a few persistent followers ever see them, but no one knows that distribution has been suppressed. Or content can be demonetized, so that speakers unpopular within the Valley, even those with large followings, cannot use ad funding to expand their reach. Or facially neutral rules, such as prohibitions on doxing or encouraging harassment, are applied with maximum force only to the unpopular. Combined with the utterly opaque mechanisms for appeal that the Valley has embraced, these tools allow even one or two low-level but highly motivated content moderators to sabotage their target’s speech.
Artificial intelligence won’t solve this problem. It is likely to make it worse. AI is famous for imitating the biases of the decision-makers it learns from—and for then being conveniently incapable of explaining how it arrived at its own decisions. No conservative should have much faith in a machine that learns its content moderation lessons from current practice in Silicon Valley.
Foreign Government Interference
European governments, unbound by the First Amendment, have not been shy about telling Silicon Valley to suppress speech they dislike—which includes true facts about people who claim a right to be forgotten, or charges that a politician belongs to a fascist party, or what these governments consider hate speech. Indeed, much of the Valley has already surrendered, agreeing to use their terms of service to enforce Europe’s sweeping view of hate speech—under which the president’s tweets and the attorney general’s speeches could probably be banned today.
Europe is not alone in its determination to limit what Americans can say and read. The Chinese company Baidu has argued successfully that it has a First Amendment right to return nothing but sunny tourist pictures when Americans search for “Tiananmen Square June 1989.” Today, any government but the United States is free to order a U.S. company to suppress the speech of Americans the government doesn’t like.
In the long run, it is dangerous for American democracy to give highly influential social media firms a blanket immunity when they bow to foreign government pressure and suppress the speech of Americans. The U.S. needs to armor itself against such tactics, not facilitate them.
This isn’t the first time the U.S. has faced a disruptive new technology that changed the way Americans talked to each other. The rise of broadcasting a hundred years ago was at least as transformational, and as threatening to the political order, as social media today. It helped Hitler and Mussolini achieve power and led to the rise of Father Coughlin—not to mention that of Franklin Delano Roosevelt.
American politicians worried that radio and television owners could sway popular opinion in unpredictable or irresponsible ways. They responded with a remarkable barrage of new regulation—all designed to ensure that wealthy owners of the disruptive technology did not use it to unduly distort the national dialogue. Broadcasters were required to get government licenses, not once but over and over again. Foreign interests were denied the right to own stations or networks. A “fairness” doctrine required that broadcasters present issues in an honest, equitable and balanced way. Opposing candidates for office had to be given equal air time, and political ads were to be aired at the lowest commercial rate. Certain words (at least seven) could not be said on the radio.
This entire edifice of regulation has acquired a disreputable air in elite circles, and some of it has been repealed. Frankly, though, it doesn’t look so bad compared to having a billionaire tech bro (or his underpaid contract workers) decide that carpenters communicating with friends in Sioux Falls are forbidden to “deadname” Chelsea Manning or to complain about Congress’s failure to subpoena ███████████.
(Irony alert: You may have guessed that Lawfare won’t let me use the alleged whistleblower’s name here. I think that’s wrong. The identity isn’t classified. The law prevents government employees from identifying the person, not those outside government. And the alleged whistleblower is no more in danger than the many other government employees who testified before the House in public during impeachment proceedings. Not to use the name is security theater. That said, unlike some social media platforms, which would have simply memory-holed my post, Lawfare has allowed me to publish this, and even to include this mini-rant.)
The sweeping broadcast regulatory regime that reached its peak in the 1950s was designed to prevent a few rich people from using technology to seize control of the national conversation, and it worked. The regulatory elements all pretty much passed constitutional muster, and the worst that can be said about them today is that they made public discourse mushy and bland because broadcasters were cautious about contradicting views held by a substantial part of the American public.
Viewed from 2020, that doesn’t sound half bad. The country might be better off, and less divided, if social media platforms were more cautious today about suppressing views held by a substantial part of the American public.
Whether all these rules would survive contemporary First Amendment review is hard to know. But government action to protect the speech of the many from the censorship of the privileged deserves, and gets, more leeway from the courts than the free speech absolutists would have you believe.
That said, regulation has many risks, not least the risk of abuse. Each political party in this divided country ought to ask what the other party would do if given even more power over what can be said online. It’s a reason to look elsewhere for solutions.
Network Effects and Competitive Dominance
Maybe regulation to protect minority views wouldn’t be necessary if there were more competition in social media—if those who don’t like a particular platform’s censorship rules could go elsewhere to express their views.
In practice, they can’t. YouTube dominates video platforms, Facebook dominates social platforms, Amazon dominates online book sales, etc. Thanks to network effects, if you want to spread your views by book, by video, or by social media post, you have to use their platforms and live with their censorship regimes.
It’s hard to say without investigation whether these platforms have violated antitrust laws in acquiring their dominance or in exercising it. But the effect of that dominance on what Americans can say to each other, and thus on political outcomes, should be part of any antitrust review of their impact. Antitrust enforcement often turns on whether a competitive practice causes consumer harm, and suppression of consumer speech has not usually been seen as such a harm. It should be. Suppression of speech Silicon Valley dislikes may well be one way the industry takes monopoly profits in something other than cash. If so, there could hardly be a higher priority for antitrust enforcement, because such a use of monopoly strikes at the heart of American free speech values.
One word of caution: Breaking up dominant platforms in the hope of spurring a competition of ideas won’t work if the result is to turn the market over to Chinese companies that already have a similar scale—and even less interest in fostering robust debate online. If the goal is to spur competition in social media, Americans need to make sure we aren’t trading Silicon Valley censorship for the Chinese brand.
Transparency
Transparency is everyone’s favorite first step for addressing the reality and the perception of bias in content moderation. Surely if the rules were clearer, if the bans and demonetizations could be challenged, if inconsistencies could be forced into the light and corrected, users would be less angry and suspicious and the companies would behave more fairly. I tend to agree with that sentiment, but we shouldn’t kid ourselves. If the rules are made public, if the procedures are made more open, the cost will be enormous.
And not just in money. All of the rules and all of the procedures can be gamed, and the more transparent they are, the more effective the gaming. Speakers with bad intent will go to the very edge of the rules; they will try to swamp the procedures. And ideologues among the content moderators will still have room to seize on technicalities to nuke unpopular speakers. Transparency may well be a good idea, but its flaws are going to be painful to behold if that’s the direction any effort to discipline Section 230 takes.
What Is To Be Done?
I don’t have much certainty to offer. But if I were dealing with the Section 230 speech suppression immunity today, I’d start with something like the following:
First, treat speech suppression as an antitrust problem, asking what can be done to create more competition, especially ideological and speech competition, among social media platforms. Maybe breakups would work, although network effects are remarkably resilient. Maybe there are ways antitrust law can be used to regulate monopolistic suppression of speech. In that regard, the most promising measures probably would involve requiring further transparency and procedural fairness from content moderation, perhaps backed up by governmental subpoenas to investigate speech suppression accusations.
Second, surely everyone can agree that foreign governments and billionaires shouldn’t play a role in deciding what Americans can say to each other. The United States needs to bar foreign ownership of social media platforms that are capable of playing a large role in U.S. political dialogue. The government should force into the light any foreign government content mandates that affect Americans, such as orders that social media platforms remove content that another nation considers defamatory or that includes a map with borders the other country doesn’t recognize. Social media companies carrying out such mandates should be required to disclose that fact, perhaps by requiring them to file under the Foreign Agents Registration Act. And the U.S. should sanction the nations that try to dictate what Americans can read and say.
And finally, here’s a no-brainer. If nothing else, it’s clear that Section 230 is one of the most controversial laws on the books. It is unlikely to go another five years without being substantially amended. So why in God’s name is the U.S. writing the substance of Section 230 into free trade deals—notably the United States-Mexico-Canada Agreement? Adding Section 230 to a free trade treaty makes the law a kind of low-rent constitutional amendment: If Americans want to change it in future, organized tech lobbies and U.S. trading partners will claim that the U.S. government is violating international law. Why would the U.S. do this to itself? It’s surely time for this administration to take Section 230 out of its standard free-trade negotiating package.
All that said, I confess to more than a little uncertainty about the best way to address social media’s misuse of its Section 230 immunity. I look forward to others’ contributions to the discussion of this complex issue, and I hope to return to it in the future.
I have many friends, colleagues and clients who will likely disagree with much of what I say here. Don’t blame them. These are my views, not those of my clients, my law firm or anyone else.