
When Platforms Do the State’s Bidding, Who Is Accountable? Not the Government, Says Israel’s Supreme Court

Daphne Keller
Monday, February 7, 2022, 1:01 PM

The Adalah ruling highlights an unresolved tension between widely held goals for restricting online content and the constitutionally permissible means available to achieve them.

The Supreme Court of Israel. (Official photo by the Israeli Government Press Office)


Around the world, law enforcement bodies known as Internet Referral Units (or IRUs) are asking platforms like Facebook to delete posts, videos, photos and comments posted by their users. Platforms are complying, citing their own discretionary terms of service as the basis for their actions. Users are not being informed of governments’ involvement. Courts have little or no role.

The Israeli Supreme Court recently rejected a challenge to Israel’s version of this system, in a case called Adalah v. Cyber Unit. The ruling allows Israel’s IRU, known as the Cyber Unit, to continue asking platforms to remove tens of thousands of user posts annually. At first glance, that outcome seems inconsistent with the U.S. approach to speech rights under the First Amendment. In the seminal 1963 Bantam Books case, the U.S. Supreme Court reviewed largely similar facts. Rhode Island’s “Commission to Encourage Morality in Youth” regularly sent intimidating notices to book distributors identifying potentially unlawful literature, which led the distributors to withdraw those books from circulation. The court held this behavior unconstitutional. As it explained, “the Commission is not a judicial body and its decisions to list particular publications as objectionable do not follow judicial determinations …. We have tolerated such a system only where it operated under judicial superintendence and assured an almost immediate judicial determination of the validity of the restraint.” This central role of courts is a cornerstone of U.S. First Amendment law under the prior restraint doctrine. 

What distinguishes Adalah and general IRU operations from the Bantam Books scenario is that platforms’ involvement is nominally voluntary. The Israeli Cyber Unit did not threaten platforms with any adverse consequence for noncompliance. This apparent lack of coercion was important to the court. As both Derek Bambauer and Genevieve Lakier have written, it would make challenging this kind of behavior—which legal academics call “jawboning”—complicated under U.S. law, as well.

Lawfare published a great post by Tomer Shadmy and Yuval Shany explaining the Adalah ruling when it first came out in Hebrew last year. (Lawfare also has an illuminating podcast with part of the legal team that challenged the Cyber Unit’s work.) I won’t repeat that summary. I will instead unpack the parts of the case that strike me as important for the global debate about IRUs, and for U.S. constitutional analysis.

I will begin by discussing the key factual questions that the court considered unresolved, and then dig deeper into underlying issues about users’ rights and online governance. I come to these questions with some personal experience. Ten years ago, as the legal lead for Google’s web search service, I saw the prehistoric ancestors of today’s government takedown requests. Those periodic demands were nothing like the streamlined, high-volume operations of today’s IRUs. But the ambiguity about the requesting agencies’ true legal authority, and the sense that failure to cooperate would likely bring repercussions, probably hasn’t changed that much. 

What has changed is the widespread sense of a crisis in online speech. With it has come an increasing, if tacit, acceptance of solutions that bypass the usual mechanisms of government, and instead depend on platforms’ discretionary powers as private actors unconstrained by the Bill of Rights and its international analogs. Faced with dangerous speech at internet scale, it is tempting to throw up our hands and abandon constitutional governance. The Adalah court paraphrased the Israeli government making essentially this argument. Defending the IRU’s operations, the government asserted that:

the world now agrees that this is the only effective means for the removal of violating publications from the internet, and that otherwise a situation of total anarchy would emerge, in which everyone would do as he sees fit, while violating local criminal law.

Some parts of the Adalah court’s opinion seem motivated by this logic. At the same time, the ruling highlights what is lost by turning toward these “voluntary” mechanisms: legislation as a source of rules, and courts as a source of adjudication, for online speech. 

The Case

Israel’s Cyber Unit is modeled loosely on law enforcement bodies established in the United Kingdom, France and elsewhere over the past half dozen years. These IRUs vary somewhat, but they share a reliance on platform terms of service (rather than law) as the basis for removal, and on police, security agencies, or prosecutors (rather than courts) as government assessors of particular posts. European IRUs have been criticized extensively by human rights groups for subverting due process, expression and information rights, and the rule of law. These groups scored a major victory when the European Parliament rejected a provision that would have enshrined IRU takedown operations in the EU’s Terrorist Content Regulation. 

The Israeli IRU uses two separate processes to get online content removed from platforms. The first, which is not at issue in Adalah, is the so-called statutory track. It is expressly authorized by legislation but covers only a limited set of claims. In it, the IRU follows what the court calls “the classic view of criminal enforcement in which a prosecutor… applies to a District Court judge … for an order” requiring platforms to remove content. The statutory track, in other words, looks much more like the permissible system of speech regulation under Bantam Books.

Adalah is concerned with the second track, which the court calls the voluntary track. Here, the IRU identifies content it believes to be illegal and brings removal requests directly to private platforms, search engines and hosting companies. The requests cite the platforms’ private terms of service as the basis for removal. Some platforms treat the IRU as a “trusted flagger,” prioritizing its requests and perhaps subjecting them to less scrutiny. Israel’s IRU mostly reports on content it believes is linked to terrorism—including, plaintiffs in the case alleged, lawful dissent by Palestinians and others. But it also asks platforms to erase other kinds of content, including harassment of elected officials and civil servants. 

In 2018, 99 percent of the IRU’s reports concerned terrorist content and 87 percent of its reports were sent to Facebook. In 2019, it submitted 19,606 reports—each of which might list “tens or even hundreds” of individual user posts, comments, memes, songs or videos. Assuming the lower-end figure of 10 items per report, that would be some 196,000 items reported that year. In 10 percent of those cases, or roughly 20,000 times, platforms declined to remove material identified by the IRU. 

Plaintiffs, two Israeli nongovernmental organizations, brought a long list of claims, including that the IRU’s operations violate free expression and due process rights. They also argued that by bypassing courts, the IRU violates the separation of powers and exceeds its authority. Much of the court’s eventual 52-page ruling focused on this question of authority and what formal statutory basis the IRU needs for its actions under Israeli law. Importantly, the court recognized the IRU’s notices to platforms as “government acts,” in need of adequate statutory authorization. The degree of authorization required depended in turn on how the IRU affects users’ rights, because acts that may impinge on those rights need a clearer legislative basis. The court concluded that it couldn’t assess the impact on users’ rights because plaintiffs had presented insufficient evidence about two questions. First, was any protected speech affected? Second, were platforms’ choices to remove that speech coerced by the state, or truly voluntary?

I think the answers to both of these questions are clearly yes. But I also sympathize with the court, which seems palpably relieved at finding an evidentiary basis to dodge the deeper questions this case raises about online speech and governance in constitutional democracies.

The Court’s Factual Questions About Suppression of Lawful Speech 

Is Any Lawful Speech Actually Suppressed?

The Adalah court placed great weight on the lack of proof that the IRU’s actions had harmed any particular lawful speech. Such a showing would be difficult, since platforms typically don’t tell users when the IRU was involved in a removal decision. The IRU itself seemingly doesn’t keep track, either. (The court took it to task for this, appropriately.) So no one knows which posts were removed at the IRU’s request, who was affected, or whether the speech was legal.

We do know, though, that the IRU deemed tens or hundreds of thousands of posts illegal without consulting a court. Getting all those legal judgment calls right would be remarkable. We also know that other IRUs have requested removal of online material ranging from Grateful Dead recordings to scholarly articles. And we know that the platforms thought Israel’s IRU was wrong roughly 20,000 times in a single year. 

Formally, when platforms pushed back on the IRU notices, they were just saying, “This doesn’t violate our TOS.” Perhaps as a matter of evidence law, Israeli courts could not read anything else into these platform decisions. Realistically, it is hard to imagine platforms resisting government requests unless they also believed the posts were legal speech. Knowing about illegal content and failing to take it down exposes platforms to serious liability risk in most countries (including the U.S. for some claims). Shadmy and Shany write that under Israeli law, failing to remove put platforms at “concrete legal risk for civil or even criminal liability.” By pushing back on these notices, platforms were effectively saying that the content identified by the IRU was legal.

I assume the court stood on solid ground, under Israeli law, in calling for more concrete evidence. In my opinion, it should have rested its ruling entirely on this dry procedural ground. Instead, it posits its own unsupported set of facts. First, the court suggests that the affected accounts were bots, and not real people. Presumably that is true of some, but assuming that no humans were affected by such a vast speech removal campaign seems highly speculative. Second, the court hypothesizes that the affected speakers are not citizens or residents of Israel. That’s a weird assumption, factually. It is also not clear that it matters, legally. Shadmy and Shany tell us that legal questions about protections for such speakers—particularly if the court is referring to people in the Palestinian territories—are far from settled. In any case, the IRU’s actions affect the ability of people inside Israel to access and read lawful information.

The court does some further hand-waving about jurisdiction, which I think makes its analysis less convincing. The IRU’s voluntary enforcement track is necessary, it says, because other countries have divergent laws about speech and platform liability. But this national variation can’t be enough to preclude normal court proceedings. If it were, the world would have virtually no case law about online speech. The ruling also suggests courts have “limited international judicial authority” over speech originating abroad, leaving the voluntary enforcement track as the only way to reach “‘bad actors’ in cyberspace.” This is an odd thing to say when rejecting plaintiffs’ claims that the Israeli IRU exceeded its authority (unless the court is truly saying “courts have no power, so unconstrained acts by other branches of government are our only recourse,” which is even more troubling). It also doesn’t seem legally relevant. Whether or not Israeli courts have jurisdiction over foreign users, they almost certainly have jurisdiction over platforms that make content available on their Israel-targeted services. If they didn’t, numerous Israeli cases against American platforms (including one for which I testified in Jerusalem District Court) would have been dismissed.

Are Platforms Being Coerced?

For American lawyers, coercion lies at the heart of the constitutional question about IRUs. If the state actors are just helping platforms do what they want to do anyway, that could be enough to distinguish the case from Bantam Books—particularly given the messiness of subsequent U.S. case law. Bantam Books itself disclaimed any intention to preclude all “private consultation between law enforcement officers and distributors” of speech. 

From the Adalah court’s perspective, this is an evidentiary question. The court evinces great frustration that platforms were not joined as parties to the case, and so could not tell it how they feel when law enforcement agents ask them to take down content. This question about giant corporations’ subjective feelings strikes me as misplaced. Let’s be real about how the terms of service arrived at their present form in the first place: platforms gradually, “voluntarily” added ever more rules and internal enforcement processes, including restrictions affecting lawful speech, in response to government pressure. Countries like the U.K. were particularly public in their demands, but I would be very surprised if Israel’s government were less aggressive in its communications. Platforms may have since internalized these rules or forgotten their history. But courts should not be blind to states’ clear and often public role.

Beyond platforms’ overall policies, there is the matter of how individual takedown requests play out. The IRU probably identifies a certain number of easy cases: posts that are obviously prohibited, or obviously permitted. The remainder are harder, gray-area judgment calls—which might include descriptions of history that arguably glorify violence, quotations from the Koran that are popularly used by extremists, sharp criticisms of Israeli authorities, and more. It’s in reviewing these particular posts, by particular people, that platforms will most likely feel the weight of state pressure. 

At root, though, I don’t think the legal and evidentiary question about coercion is the right one. If platforms don’t care about keeping particular content online, they may genuinely not feel coerced when the police come knocking. I have heard non-U.S. officials express surprise at one recent major market entrant’s lack of resistance to government requests, for example. For a strictly rational platform, saying no to governments may not be worth the potential costs: upticks in critical attention from police or prosecutors, public tongue-lashings in legislative hearings, regulatory backlash (new content laws, or laws that just so happen to hurt platform interests in areas like tax, competition or privacy), and even arrests (as have happened in Brazil and India), service blockages (Turkey, Russia), or seizures of assets or moneys owed (as recently authorized in Austria). 

How current company employees subjectively feel about IRU takedown requests doesn’t change those requests’ impact on users’ speech. Even asking if the company’s terms of service “really” prohibit certain speech is unhelpful if the terms were drafted to appease the state in the first place, or in the inevitable gray-area cases. Both U.S. and Israeli legal doctrine may be elevating form over function by pretending otherwise. 

Deeper Questions About Public and Private Control Over Online Speech 

At the end of the day, I see no serious question that lawful speech is being suppressed by IRUs, or that platforms are being “coerced” in a sense that should be legally cognizable when users assert their speech rights. That doesn’t mean the case is easy, though. It surfaces, more frankly than is comfortable, unresolved tensions between widely held goals and the constitutionally permissible means available to achieve them. I’ve analyzed these tensions in more depth elsewhere, but they arise with particular acuteness in Adalah.

Sympathy for the Government: How Do You Enforce the Law?

Broadly speaking, IRUs have two potential functions that raise serious concerns about users’ rights. One—which is seemingly not at issue in Adalah—is suppressing legal speech. IRUs can achieve this by invoking platforms’ own rules because, as the EU’s counterterrorism coordinator put it, those rules “often go further than national legislation and can therefore help to reduce the amount of radicalising material available online.” The other IRU function—which is squarely at issue in Adalah—is acting quickly and at scale against harmful speech, by bypassing courts. The Adalah court seems to anticipate that constitutional problems with this approach can be laid to rest once the legislature enacts an authorizing statute to legitimate the IRU’s actions. 

What would that legislation look like, though? Suppose, following the model of the IRU’s current statutory track and the “Facebook Bill” that Israel nearly passed in 2018, lawmakers required the IRU to obtain court orders before asking platforms to take down users’ posts. Could Israeli courts really review the tens or hundreds of thousands of posts the IRU reports each year? Turning to courts means prioritizing due process and speech rights over speedy resolution of claims, including for content that may incite serious violence—like the social media posts said to have inspired dozens of murders in Israel’s 2015 “knife intifada.” When claims are very numerous (as the IRU’s are) and involve dangerous content, it’s not surprising that both law enforcement and courts are drawn to workarounds that bypass judicial review.

If the goal is high-volume, high-speed resolution of speech claims, then the perfect (meaning judicial supervision) arguably becomes the enemy of the good (meaning public accountability of any sort). Some countries can try to solve the problem using cheaper compromise mechanisms, like the administrative review in some EU legal instruments. In the U.S., though, that kind of quick-and-dirty justice in disputes about speech is unlikely to be an option. It may be the courts or nothing. If courts are not an option, the alternative may be the “nothing” that is platform-defined rules and platform-operated adjudication. And perhaps, as in Israel, some quiet nudges from the authorities. 

Sympathy for Users: Who Do You Sue?

Michael Birnhack and Niva Elkin-Koren, writing presciently in 2003, called this combination of government influence and platform action “the invisible handshake.” The Israeli Supreme Court cites their work, as well as Jack Balkin’s description of a “triangular” relationship between governments, platforms and users. The court sees a glimmer of hope for users’ rights here. Even if users can’t sue their governments, the court suggests, they can still sue their platforms:

[W]here the state does not demand or impose removing or restricting expression, and the online platform operator is the one who removes the publication at its discretion, it cannot be said that it is the state that infringes the right, and in any case, those harmed have other remedies, including against the online platform operators.

That sounds reassuring. I don’t think it’s true, though. In the U.S., none of the dozens of speech-reinstatement claims filed against platforms have succeeded. I think current U.S. Supreme Court precedent pretty clearly supports that outcome. A few countries with more “horizontal” approaches to human rights have let users bring some kinds of speech claims against platforms. So far this list includes Brazil, Italy, Germany, the Netherlands and Poland. (It also briefly included Israel. A court there ordered Facebook to reinstate content based on contract law, but its ruling was formally reversed in connection with a settlement—available in Hebrew and English here.) These cases don’t give users the same constitutional rights against platforms that they have against governments, though. If they did, users could force platforms to carry all kinds of barely-legal threats, pornography, bullying, disinformation, racial slurs and more. Few internet users—or governments—would like that. 

The country with the most developed pro-plaintiff case law in these cases is Germany, where demands for platforms to reinstate content have reached the Federal Court of Justice. That court held that, because of Facebook’s dominant role, the platform could be held to some state-like obligations to protect users’ rights. Ultimately, though, these duties are state-lite at best. Facebook can still use its terms of service to take down legal speech. It just has to offer things like appeal processes to users in doing so. Other rulings have said that private terms of service, while enforceable, must be interpreted in light of free expression rights. Under any of these formulations, platforms can still take down legal speech.

The same problem arises with another avenue identified by the Israeli Supreme Court as a potential solution: an appeal to Facebook’s privately constituted Oversight Board. The board, which resolved 21 cases last year, lacks capacity to serve as a meaningful check on Israel’s IRU. Nor does it give users the rights they would have against governments. At most, it can offer users a more rights-protective interpretation of the platform’s private terms of service. 

That leaves users with no remedy in many cases where states initiate removal of their lawful speech. As I put it in my longer article, “On the one hand, governments can bypass constitutional limits by deputizing private platforms as censors. On the other, platforms can take on and displace traditional state functions, operating the modern equivalent of the public square or the post office, without assuming state responsibilities.” For those concerned about internet users’ rights, this situation merges long-standing fears about corporate abuses with even longer-standing fears about abuses by governments.

One major problem with this situation is the way it can align platforms and governments on the same side—against the user. Laundering state power through private platforms is easiest when users don’t have enforceable rights against the platforms. Put cynically, the more discretion platforms have to surveil users and restrict their speech, the more attractive they become as cat’s paws for governments. By the same token, a government that prioritizes content regulation above other goals may have reason to forfeit competition in order to preserve a small number of regulable chokepoints for online speech. As Cory Doctorow put it, “Once it has been knighted to serve as an arm of the state, Big Tech cannot be cut down to size if it is to perform those duties.”

Humans in Government: Which Decision-Makers Should We Trust?

Israel’s Cyber Unit has one function that appears to be unique among IRUs: asking platforms to remove harassment or threats against individual state employees. In principle, that could mean asking platforms to erase anything from serious threats against a judge to #metoo allegations against a politician. Does this service of individual interests make Israel’s IRU more vulnerable to abuse or corruption than an IRU that “merely” protects the state’s more institutional interests against violent extremism? I’m not sure. There is ample documentation of states using terrorism laws as pretexts to silence dissent. Many critics say this happens in Israel. But a state acting to protect individual politicians or civil servants adds another potential problem, one in some ways more akin to Latin American critiques of “Right to Be Forgotten” laws than to European critiques of IRUs.

In Adalah, the IRU assured the court that it “acts with great restraint” and carefully considers things like “the reputation of the subject of the publication” before asking platforms to delete those publications. For particularly prominent people, it even adds extra layers of internal review—a mechanism reminiscent of Facebook’s own much-criticized “cross-check” system. Under Facebook’s internal system, an IRU complaint about an Israeli politician might even be escalated for review by the company’s head of policy for Israel—who herself previously worked for Benjamin Netanyahu. No similar review mechanisms, and no similar revolving doors, seem to exist for allegedly terrorist speech.

Of course, even under rights-protective rules like that of Bantam Books, speech is assessed by judges—an elevated variety of humans, but humans nonetheless. This has proved a real practical problem in some countries. In Brazil, for example, the Supreme Court has reviewed weighty questions about online speech in the context of material attacking or criticizing the same judges involved in the case. Similar problems might arise anywhere. The Adalah court references a dispute, apparently well-known in Israel, between a mother and the civil servants and judges involved in removing her children from her custody. As a news source described the competing takes on the case, “Prosecutors are referring to the case as ‘the online terrorism affair,’ while [the mother’s] supporters call it ‘the vengeful judges case.’” Ultimately, even when courts review cases, it’s humans all the way down. But at least those humans are trained, vetted, and constrained by rules of procedure to provide a check on improper state action. That check is missing, or replaced by private platforms, for the tens or hundreds of thousands of posts deleted at the IRU’s request in Israel each year.


Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center. Her work, including academic, policy, and popular press writing, focuses on platform regulation and Internet users' rights in the U.S., EU, and around the world. She was previously Associate General Counsel for Google, where she had responsibility for the company’s web search products. She is a graduate of Yale Law School, Brown University, and Head Start.
