
Microsoft’s Crackdown on Unit 8200 Reveals Tech’s Intermediary Role

Yotam Berger
Wednesday, October 8, 2025, 12:00 PM
Microsoft’s recent action against Israel’s Unit 8200 demonstrates Big Tech’s growing ability to limit state-operated surveillance practices.
Former Israeli Minister of Defense Yoav Gallant (left), Israeli Prime Minister Benjamin Netanyahu (center), and Former Chief of the General Staff Herzi Halevi (right) (Photo: IDF/WikiMedia Commons, https://tinyurl.com/3m4byvw7, CC BY-SA 3.0)


On Sept. 25, Microsoft announced that it would be blocking Unit 8200—Israel’s elite intelligence arm—from a cloud service that was used to operate a surveillance program aimed at collecting Palestinian data. The incident underscores technology giants’ growing role as surveillance intermediaries.

Microsoft’s announcement shows that private entities, not just governments, may employ “kill switches,” and that they may use them to limit state-operated surveillance programs. The increasing reliance of governments on these companies for national security and law enforcement purposes gives tech giants new practical ability to regulate surveillance practices. But these new powers come with new responsibilities.

Microsoft Blocks Unit 8200

Israel’s elite intelligence arm, Unit 8200, has long been involved in mass surveillance and bulk data collection. The unit is known for producing extraordinary technical talent, and its veterans are highly sought after in the civilian sector, due in large part to their unique experience with advanced military-grade technologies. It is no secret that the unit has ties with civilian tech companies, including Microsoft. In May, following public scrutiny that prompted an internal review, Microsoft released a statement declaring that it had “long defended the cybersecurity of Israel and the people who live there” and that its “commitment to human rights guides how we engage in complex environments […]. Based on everything we currently know, we believe Microsoft has abided by these commitments in Israel and Gaza.”

In August, however, Unit 8200’s surveillance practices—and its cooperation with Microsoft—again drew public attention. A joint investigation by the Guardian and the Israeli outlet +972 Magazine revealed that the unit stored enormous troves of Palestinian phone calls on Microsoft’s servers in Europe. Beginning in 2022, the unit reportedly used Microsoft’s Azure cloud service to store thousands of terabytes of intercepted phone calls between Palestinians, in both Gaza and the West Bank. Intelligence sources cited in the report said Unit 8200 turned to Microsoft after concluding it lacked the infrastructure “to bear the weight of an entire population’s phone calls.” By July, the Guardian noted, 11,500 terabytes of Israeli military data were stored on Microsoft servers in the Netherlands. The program was reportedly designed to allow the analysis of “millions of phone calls each day.”

The revelations prompted further scrutiny. On Aug. 16, reports emerged that Microsoft had launched a formal review of the allegations. On Sept. 25, the company announced that it had terminated Unit 8200’s access to parts of its infrastructure. According to the Guardian’s follow-up reporting, Microsoft executives told Israeli officials that “while [their] review is ongoing, we have at this juncture identified evidence that supports elements of the Guardian’s reporting” and that Microsoft “is not in the business of facilitating the mass surveillance of civilians.” In an update to employees posted publicly the same day, Microsoft Vice Chair and President Brad Smith confirmed that the company had found evidence supporting the allegations and had therefore informed the Israeli military of its decision “to cease and disable specified […] subscriptions and their services, including their use of specific cloud storage and AI services and technologies.”

The Israel Defense Forces (IDF) did not provide journalists with an on-the-record response at the time, but several Israeli news outlets quoted “military officials” as saying that Unit 8200 anticipated the move. According to these officials, the data had been backed up before access was cut off, and “there is no damage to the operational capabilities of the IDF.”

Still, Israeli newspapers emphasized that the move represents a potential “change of approach.” Microsoft, like many other tech giants, has long seen Israel not only as a development hub but also as a key market. This, some reporters have pointed out, marks the first time a major U.S. technology company has “openly imposed sanctions on Israel.”

Tech Giants as Surveillance Intermediaries

Microsoft’s move can be situated within the broader framework of technology companies’ emerging role as “surveillance intermediaries,” a concept developed by Lawfare’s Alan Rozenshtein in a 2018 Stanford Law Review article. Under this theory, tech giants play a crucial role in enabling state surveillance, but—unlike historical equivalents such as telecommunications companies—they are less willing to cooperate unconditionally. Instead, they often position themselves as intermediaries that constrain the power of the “surveillance executive.”

Rozenshtein identifies both financial and ideological incentives behind this behavior and describes techniques of resistance that companies employ to check executive power, including technological unilateralism (making technological changes to systems and devices that are adverse to the government’s preferences), policy mobilization (lobbying and engaging in law and policymaking), and proceduralism and litigiousness (insisting on certain procedural aspects, and resisting certain policies and acts in courts). A classic example of the latter two was Apple’s refusal—and subsequent resort to the courts—when the FBI sought to compel the company to unlock the San Bernardino shooter’s iPhone.

The 8200-Microsoft dispute can also be understood within this theoretical framework. Indeed, in the typical “surveillance intermediary” case, governments do not choose whether to rely on a tech company; rather, companies simply possess—or have the ability to develop—the tools the government needs to access a device or data. The 8200-Microsoft affair, however, demonstrates that tech giants can also facilitate mass surveillance by providing infrastructure and active cooperation to governments, and can accordingly end up “pulling the plug” on such programs. In this regard, too, they may serve as surveillance intermediaries.

Microsoft’s role as a potential surveillance intermediary must be considered in light of Unit 8200’s current standing in Israeli society—and its inability to regulate itself. The Oct. 7, 2023, Hamas attack on Israel, in which terrorists killed approximately 1,200 people and kidnapped around 250, generated intense criticism of the IDF’s failures, particularly those of its intelligence services, within Israeli society and its political ecosystem. Despite the IDF’s—and Unit 8200’s—advanced technological capabilities, the military failed to anticipate or prevent the attack.

Critics have suggested that the Israeli military, and Unit 8200 in particular, had become overreliant on technology—entrapped in a “conception” that blinded it to real threats. As Ran Heilbrunn argued in American Affairs, “Unit 8200 had closed itself off in an epistemological feedback loop, rationalizing its ignorance by proclaiming that STT programs and access to Google Search could substitute for fluency in Arabic.” The unit’s commander at the time, Yossi Sariel, resigned in September 2024 over the failures surrounding the attack. Sariel was also reportedly the official who, well before the attack, initiated the program that Microsoft has now blocked.

This context illustrates the potential internal pressures within the Israeli national security establishment to expand surveillance. Facing harsh public criticism for its failures, Unit 8200 could have strong incentives to adopt increasingly intrusive practices and exploit every tool at its disposal. While the program with Microsoft predated Oct. 7, it is easy to imagine such efforts expanding in the aftermath of the attack, driven by the fear that another catastrophic assault could be planned without the unit’s knowledge. Given that context, we cannot assume that internal checks and balances within the national security establishment will function as effective watchdogs.

Big Tech’s New Power—and New Responsibility

In such situations, there must be an outside actor serving as an effective regulator—for instance, demanding that security practices comply with both domestic and international law. Yet it is difficult to imagine any governmental or executive agency reliably playing this balancing role, second-guessing steps ostensibly taken to enhance national security.

Courts, too, are an uncertain resort. In many democracies, courts display a tendency to defer to the executive on matters of national security, applying judicial review in a more limited fashion. The Supreme Court of Israel, historically known for its relatively interventionist stance in the 1990s and early 2000s, has nevertheless faced the same limits and challenges as its counterparts abroad when adjudicating national security cases. This does not mean that preventive and balancing mechanisms within the national security and law enforcement establishment are irrelevant. It just means they have limits and that technology giants should be aware of these limits when negotiating with governments.

This dynamic, then, creates a cycle of ever-expanding security and law enforcement practices—driven by advancing technology on the one hand and by the absence of robust oversight mechanisms on the other. When courts, internal watchdogs, and other executive agencies fail to serve as meaningful checks, the role of tech giants as surveillance intermediaries becomes uniquely important, creating extraordinary responsibility.

Until now, the role of technology companies as surveillance intermediaries has been most visible in disputes over disclosure of data already in their possession. Apple’s refusal to unlock the San Bernardino shooter’s iPhone is a prominent example. The Microsoft-8200 affair, however, highlights another dimension: Tech companies may not only control access to data but also provide the very infrastructure that enables surveillance to occur at scale in the first place.

Tech giants must therefore ensure that they do not cooperate in illegal or illegitimately oppressive surveillance practices. They now have the power to prevent such cooperation—but with that power comes profound responsibility. Their obligations extend not only to shareholders but also to the broader public: refraining from terminating programs that are lawful and justified, while refusing to support those that are abusive.

A related question is how well positioned tech giants are to adjudicate these issues themselves. Often, when we think about an entity’s ability to “pull the plug” on a program, we mean the government’s ability to do so. This affair demonstrates that private entities may have that power too, not only in theory but also in practice. As noted, it seems reasonable to expect that governments and national courts—especially in times of emergency or war—will tend to approve the use of highly intrusive surveillance tools. But how should tech giants exercise their discretion in similar circumstances? And where should one draw the line? If Microsoft objects to the use of Azure to facilitate a broad surveillance program, does it object because a cloud service is involved, or because it opposes the surveillance program itself? If the latter, should Microsoft or any other firm also be concerned about government use of other products—such as Word, Excel, or even Windows itself? The line can easily blur, and transparent policies in this context could prevent future harm to human rights and national security, and possibly even to tech giants’ business.

Further, when a security agency’s very infrastructure depends on a private firm, that firm bears an even greater obligation to wield its power responsibly. If tech giants allow security agencies to store data on their servers, they must be extraordinarily cautious before cutting off access. Should such practices become common, security agencies may ultimately choose to avoid reliance on private actors—particularly international ones—when conducting national security operations.

A situation in which a private company can unilaterally terminate a contract with a security agency—with or without notice, and after having stored sensitive data—inevitably has implications beyond a single case. If private firms begin taking such actions more frequently, the effect will not be limited to Israel’s reliance on the private sector for defense contracts. Other nations, too, will be forced to act more cautiously when cooperating with private actors. This raises questions regarding the potential role of the jurisdictions in which these firms operate or are incorporated. U.S. lawmakers, for instance, could consider whether the federal government should have a voice in how such decisions are made, and whether state-based regulatory mechanisms could oversee the way tech giants exercise this discretion. U.S. government involvement in this regard may prove necessary, but it could also lead to hasty limitations on private entities. When approaching this field, lawmakers should be well aware of the benefits as well as the dangers of having a third-party, de facto regulator of surveillance programs.

This domain—how and whether the law should structure private firms’ discretion when making decisions with profound consequences for public safety—remains underdeveloped. Future legal research could help design mechanisms to guide tech giants in carrying out their expanding role as surveillance intermediaries, with the care necessary to allow governments to protect public safety while safeguarding human rights. It is better to address this emerging issue early on, including by creating transparent policies that clearly state what should and should not be done using privately made technology.


Yotam Berger is a J.S.D. candidate at Stanford Law School, where he is a Stanford Interdisciplinary Graduate Fellow and a Knight-Hennessy Scholar. He previously clerked at the Supreme Court of Israel, worked for Israel’s Deputy State Attorney, and served as Haaretz’s West Bank correspondent. His research examines cybersurveillance and the evolving relationship between law enforcement, Big Tech, and the commercial spyware industry.