Published by The Lawfare Institute
The Russian invasion of Ukraine triggered a flurry of diplomatic, political, economic and military responses. But more than in previous geopolitical crises, tech giants’ policies and sanctions have played a major role in the Ukraine conflict, alongside those of states and international organizations. Companies like Meta, Google, Apple, Microsoft, Twitter and even TikTok increasingly recognize that they cannot afford to sit geopolitical crises out.
The war in Ukraine is the most dramatic instance yet of platforms’ geopolitical turn—their growing engagement with security and geopolitical challenges incidental to their business operations. Platforms came far better prepared for the war in Ukraine than for previous major geopolitical inflection points. They have coordinated their actions with Western governments and other international actors leading the charge against Russia.
Yet platform action and platform-government cooperation are taking place on an ad hoc basis. The Ukraine crisis once again illustrates that platform-government security and geopolitical interdependence is inevitable and significant. It is time for governments and platforms to build better long-term institutions and procedures for integrating tech giants into security and geopolitical policymaking. A major challenge is to design such institutions with safeguards against potential abuses of the platform-government security nexus.
Platforms’ Geopolitical Turn
In 2016, an aggressive foreign influence campaign to undermine the U.S. presidential election blindsided Facebook, Google, Twitter and other key players. In 2017, Facebook contributed to mass atrocities in Myanmar and Sri Lanka by failing to stop incitement to violence on its platforms. In 2019, the Christchurch terrorist attack brought attention to terrorist misuse of platforms and mobilized international action.
As a result, tech giants have rightly been criticized for failing to do enough to mitigate the harm that their products cause around the world. A litany of whistleblowers and congressional hearings exposed a lack of attention to, and resources for, preventing harm to users, communities and democracies; a tendency to prioritize profit above all else; a business model that thrives on polarization; and an overwhelming focus on harm prevention in Western countries at the expense of others.
Tech giants’ trust and safety capacity-building in response to these events has received far less attention. As I documented in previous research, Facebook, Google, Twitter and others have scaled up their security and geopolitical operations to better address threats that implicate their products. They have replicated traditional structures and policy methods familiar from state security and foreign policy bureaucracy and repositioned themselves as national security and geopolitical actors.
It may be too little, too late, but this development is important. Platforms now collect and analyze intelligence on a variety of threats, often in cooperation with law enforcement. Among other policies, they designate “dangerous” organizations and individuals much like governments designate alleged terrorists. They have “coordinated behavior” policies to weed out fake accounts and counter information operations backed by foreign governments. They participate in an international organization—the Global Internet Forum to Counter Terrorism (GIFCT)—to attempt to combat online terrorism. And they worked closely with governments to protect election security in recent major global elections, including the 2020 U.S. elections.
As a result of this reorientation, platforms have begun to deal with quintessential geopolitical and security problems that have traditionally been the purview of states. And platform choices shape the policy options available to states. For example, platforms were among the first to publicly comment on whether the Taliban should be recognized as the legitimate government of Afghanistan following the U.S. withdrawal. In other instances, they had to decide what to do about Myanmar’s military after the 2021 coup, how to prevent escalation during the May 2021 hostilities between Israel and Gaza, and how to react when a U.S. election devolved into violence with the president’s tacit blessing.
Platforms’ geopolitical turn has been on full display since Russia invaded Ukraine. Strikingly, the major platforms immediately chose a side in the conflict and took concrete action. For example, Meta said it had opened a Ukraine war room. The company restricted access to Russian state-controlled media, banned advertising and monetization by those entities, downranked and labeled related posts, and took down Russian-backed information operations targeting Ukraine. Meta has regularly communicated with governments, including the Ukrainian government. The company faced Russian retaliation for these steps: Russia has restricted use of Facebook and Instagram.
Twitter took actions similar to Meta’s. Google’s Threat Analysis Group (TAG), a unit that counters government-backed hacking against Google users, said it “has been working around the clock” to monitor and address cyberattacks in Ukraine. Shortly before Russia invaded, Microsoft’s Threat Intelligence Center alerted the Ukrainian government—and the U.S. National Security Council—that a massive cyberattack against Ukraine was coming. Within hours, Microsoft had neutralized the attack and coordinated with NATO and the European Union.
Platforms’ significant role in the Ukraine war highlights the importance of public-private cooperation in addressing security and geopolitical threats in a networked world. Often, platform-government cooperation is driven by the fact that tech companies are better than the government at identifying issues with their own products and services. They can identify threats in real time. They know their technology and response options better than the government. And private companies are generally more nimble than government agencies.
Moreover, constitutional and other legal constraints impede the government’s ability to control what happens on private networks and what private actors do—especially beyond those governments’ own jurisdiction. Sometimes governments seek to avoid direct confrontation with a foreign nation by coordinating with private actors in lieu of direct government action.
This is probably why U.S. Deputy National Security Adviser Anne Neuberger reportedly played matchmaker for Microsoft and European nations to defend against Russian cyberattacks instead of the government taking matters into its own hands. Letting Microsoft neutralize Russian malware targeting Microsoft products used by Ukrainian authorities and potentially other European nations allowed for speedy action, capitalized on Microsoft’s expertise without resorting to legal coercion, and averted the possibility of direct cyber confrontation between U.S. and Russian government assets at an extremely volatile juncture.
Tech platforms will continue to play a pivotal role in international security and geopolitics as both conduits for governments and independent powerful actors whose interests may conflict with those of governments. But the conversation about governing tech platforms has largely emphasized free speech, competition and privacy concerns. Policymakers and scholars alike must also strive to better understand platforms’ role in the national and global security architecture. This will require moving beyond sporadic, unaccountable, crisis-driven engagement between tech platforms and governments in this area.
Approaching the platform regulation problem through an international security lens illuminates different regulatory concerns than the ones that have dominated the platform governance debate. For example, commentators have criticized soft cooperative arrangements such as the GIFCT and the platform-government election security working group that met regularly in the runup to the 2020 U.S. elections. Their critique emphasized free speech concerns. These commentators warn that such cooperative arrangements create content cartels and invite extensive censorship. Yet given that governments depend on platforms to address important security and geopolitical challenges effectively, soft cooperative arrangements might be unavoidable. They allow long-term coordination and may create mutual accountability. Over time, they can help develop norms and procedures for platform security and geopolitical governance.
This cooperation should build in accountability and oversight mechanisms. For example, the U.S. election security working group operated voluntarily without any transparency or oversight mechanisms. A better model should include statutory authorization for cooperative platform-government institutions—a more detailed and tailored version of the authorization provided in the Cybersecurity Information Sharing Act of 2015 and the Cybersecurity and Infrastructure Security Agency Act of 2018 for cybersecurity cooperation with the private sector. Such authorization should impose limits on the sharing of individual user information in platform-government policy and threat analysis interactions, beyond what is otherwise authorized by statute. It should also include reporting requirements on the content of the exchanges. If public disclosure is impossible given the need for secrecy in the security space, government agencies should be required to report to Congress about the content of their security and geopolitical exchanges with platform officials.
Antitrust is another key concern that has dominated the platform regulation debate. “Break Up Big Tech” has been the battle cry of prominent critics of platform power. But from a security vantage point, platform size and the dominance of several major Western players might be an advantage under the right leadership. The fewer players involved in policing online threats, and the more they stand to lose if key regulators deem them irresponsible, the easier it is for governments to build partnerships and coordinate public-private responses to geopolitical and security challenges.
None of this means that security concerns must always prevail in devising platform regulation. It’s important to recognize the limitations of institutionalizing platform-government cooperation. First, the call for structuring platform-government cooperation should be limited to democratic states. Clearly, platforms should not be the long arm of authoritarian regimes or used to trample political opposition and “enemies of the state” in countries where democracies are fragile.
Even in Western democracies, there are legitimate concerns with platform-government security cooperation: What if platforms secretly implement dubious government requests out of self-interest? What if governments jawbone platforms into taking legally questionable steps? How should personal user information be protected in these interactions? How can government capture by platforms be prevented when tighter, more structured cooperation would complicate advancing non-security-related societal interests? What happens when platform and government interests diverge? What happens when platforms must adjudicate between rival parties, domestically or internationally, as they did when they decided to deplatform former President Trump?
Acknowledging these challenges, however, does not mean policymakers and commentators can deny that at least some level of platform-government cooperation is inevitable. That cooperation must be structured for the long run with a view to balancing security and geopolitical interests with necessary restrictions on both government’s and platforms’ immense power.