
Why Is the Government Fleeing Key Tech Partnerships Before 2024?

Quinta Jurecic, Eugenia Lostri
Tuesday, February 27, 2024, 9:39 AM
Federal agencies are limiting communications with social media companies. That could mean trouble for the 2024 election—and for U.S. cybersecurity strategy.
A digitally created "wall" of popular social media application icons. (Source: Geralt, https://bit.ly/3Nl967J; CC 1.0 https://creativecommons.org/licenses/by/1.0/)


In August 2018, FBI Director Christopher Wray outlined the agency’s approach to election security in advance of that year’s midterm election. Key to the FBI’s work, he said, were the bureau’s relationships not just with other government agencies but also with the private sector. Wray pointed specifically to social media companies—which, according to the director, had been collaborating with the FBI to better address “abuse of their platforms by foreign actors.” 

“It’s going to take all of us working together to hold the field,” Wray said, “because this threat is not going away.”

With time ticking down until the 2024 U.S. presidential election—and heading into a busy year for elections around the world—you’d expect those partnerships to be more important than ever. And yet, when it comes to identifying malicious foreign activity online, the U.S. government is no longer working with the private sector to hold the field. 

As NBC reported in November 2023, the FBI has stopped sharing information with social media companies concerning influence campaigns from Russia, China, and Iran. Later that month, Meta announced that the federal government had ceased providing the company with any information about foreign election interference. At the Department of Homeland Security, the Cybersecurity and Infrastructure Security Agency (CISA) has pulled back from collaborations with both tech companies and local election officials, who in 2020 drew on the agency’s support to help respond to election falsehoods originating both at home and abroad.

The federal government isn’t scaling back its work here because the danger has gone away. To the contrary, a November 2023 report from Meta warned, “Foreign threat actors are attempting to reach audiences ahead of next year’s various elections, including in the US and Europe, and we need to remain alert to their evolving tactics and targeting across the internet.” Microsoft, too, has stated in a recent report that it expects Russia, Iran, and China to engage in influence operations ahead of 2024—and that sophisticated actors “likely will employ a combination of targeted hacking operations with strategically timed leaks to drive media coverage to elevate their preferred candidates.” Recently, Wray warned about foreign efforts against election infrastructure. And that’s without even addressing the potential chaos that could be caused by the explosive growth of generative AI.

Rather, reporting suggests that the government has pulled back from these partnerships with the private sector in response to a right-wing backlash against measures to identify and counter potentially harmful falsehoods and propaganda around elections—a backlash that has taken the form of political pressure, litigation, and congressional investigations. The Republican politicians pushing this campaign present themselves as fighting to protect Americans’ speech against government censorship. But the result of their work is that social media companies will head into 2024 without a key tool in their arsenal for responding to foreign interference on their platforms. And the retreat calls into question the government’s ability to sustain the other public-private partnerships foundational to U.S. cybersecurity strategy.

As recently as October 2022, NSA Director Gen. Paul Nakasone was trumpeting the “incredible work” of the FBI’s Foreign Influence Task Force, a partnership founded in 2017 that allowed government agencies to provide actionable intelligence to social media companies, “help[ing] providers with their own initiatives to track foreign influence activity and to enforce terms of service that prohibit the use of their platforms for such activities.” But in recent months, the task force seems to have ceased communicating with social media companies. According to NBC, as of November 2023, at least two technology companies that “used to receive regular briefings from the FBI’s Foreign Influence Task Force” had not heard from the task force for months. 

That same month, Meta’s quarterly Adversarial Threat Report—the company’s regular accounting of its efforts to rid its platform of coordinated attacks—announced publicly that “threat sharing by the federal government in the US related to foreign election interference has been paused since July.” “Sharing information between tech companies, governments and law enforcement has … proven critical to identifying and disrupting foreign interference early, ahead of elections,” the report warned.

Meanwhile, CISA appears to be scaling back both its involvement with social media companies and its efforts to help election officials report and respond to falsehoods. During the 2022 midterms, according to NPR, the agency’s “Rumor Control” website—begun in 2020 to respond to viral election falsehoods—“was not being updated as frequently, and seemed to be more limited in its scope.” Likewise, NPR reported, CISA “had no contact … with any social media companies” in 2022. In a June 2023 podcast interview with tech journalist Kara Swisher, CISA Director Jen Easterly spelled out her thinking: State and local election officials were capable of providing information about falsehoods to social media platforms themselves, she said, and CISA’s efforts were better spent elsewhere. CISA’s initiative for 2024 focuses on cooperating with state and local officials, offering a centralized information hub on security risks, and hiring 10 new experts focused entirely on elections.

Yet NPR’s reporting suggests that election officials are concerned about what 2024 may look like without assistance from CISA. In particular, election workers are feeling the absence of the Election Infrastructure Information Sharing and Analysis Center (EI-ISAC), a partnership funded through the Department of Homeland Security to assist state and local officials with election integrity measures. During 2020, EI-ISAC helped election offices identify and report election falsehoods and provided cybersecurity assistance, but NPR writes that the center is now “focusing strictly on cybersecurity”—frustrating election workers who are now struggling to respond to falsehoods on their own. “There is only so much you can ask an election official to do and to do well,” a Florida elections supervisor told NPR. (According to NPR, a CISA official told reporters that “we remain committed to ensuring that state and local election officials are provided with the techniques and tactics and procedures that we know foreign adversaries are using so that they have awareness of the threats.”)

But on the podcast, Easterly explained, “I do not think the risk of us dealing with social media platforms is worth any benefit.” The “risk” she had in mind referred to unfounded criticisms that CISA had worked to unjustly censor speech during the 2020 election—a right-wing narrative that has increasingly gained steam thanks in large part to the work of Republican Rep. Jim Jordan’s Select Subcommittee on the Weaponization of the Federal Government, authorized following the arrival of the new GOP House majority in January 2023. Jordan has aggressively pursued investigations of the private researchers and academics whose work contributed to the Election Integrity Partnership, accusing them of working to censor conservatives. Democrats take a different view: Sen. Mark Warner (D-Va.), who chairs the Senate Intelligence Committee, described this approach as “legal warfare by far-right actors.”

As Kate Starbird, a professor at the University of Washington whose Center for an Informed Public participated in the Election Integrity Partnership, reflected recently in Lawfare, this campaign is “already having a chilling effect on the field of online disinformation” and may well “hamstring future efforts to support local and state election officials in countering election falsehoods.” According to the Washington Post, the number of academics and researchers affiliated with the project “may shrink and also may stop communicating with X and Facebook about their findings” going forward.

Speaking with Swisher, Easterly also defended her decision to move away from contact with social media companies as a matter of distributing CISA’s resources most effectively. “It’s not like I’m going to back the fuck down because of conspiracy theorists,” she said.

But whatever the reason, the fact is that government agencies are backing down. The halt in communications to social media companies from the FBI’s Foreign Influence Task Force dates back to a July 2023 preliminary injunction handed down by a federal district court in Missouri v. Biden, barring broad swaths of the federal government—including CISA, Homeland Security, and the FBI—from communicating with social media companies. (The case is now before the Supreme Court as Murthy v. Missouri.) The case, brought by the attorneys general of Missouri and Louisiana, challenged government efforts to encourage platforms to remove vaccine-related falsehoods during the coronavirus pandemic, describing such pressure as “the most egregious violations of the First Amendment in the history of the United States of America.”

Judge Terry Doughty’s preliminary injunction included exemptions for communications “notifying social-media companies of national security threats” and “of cyber-attacks against election infrastructure, or foreign attempts to influence elections.” Still, government agencies chose to pull back from communications with social media companies anyway—perhaps because of the vague language and uncertain scope of the judge’s order. (The injunction was also riddled with puzzling misrepresentations, including quotations attributed to Election Integrity Partnership researchers that appear to have been taken out of context or entirely invented.)

These restrictions are no longer in place, having been first narrowed by the U.S. Court of Appeals for the Fifth Circuit after the government’s appeal and then stayed by the Supreme Court in October after the justices agreed to hear the case. Still, government outreach to social media companies has remained dormant. In an October hearing before the Senate Homeland Security Committee, both Wray and Secretary of Homeland Security Alejandro Mayorkas conveyed that Missouri v. Biden had led their agencies to constrain communication with tech companies: According to Mayorkas, his department no longer participates in meetings with platforms, while Wray stated that the FBI’s limited interactions with social media companies have “changed fundamentally in the wake of the court’s ruling.” According to NBC, the new process, instituted out of an “abundance of caution,” appears to entail preapproval and supervision by Department of Justice lawyers of all FBI interactions with tech platforms.

In the absence of clear guidance, and frightened of political pressure, the federal government is shrinking away from partnerships that will be needed to protect the 2024 election. Meta’s November report makes clear that the platform is concerned about potential foreign interference in 2024. Among the influence campaigns identified and removed by the platform was a China-based effort targeting both Democratic and Republican politicians, including prominent members of the GOP such as Rep. Jordan and Florida Gov. Ron DeSantis, who has railed against supposed censorship by social media companies.

This doesn’t mean that the state of things prior to the Missouri injunction was ideal. The case raises serious and unsettled First Amendment questions. The extent to which the government may permissibly lean on private entities to remove content—what’s become known as “jawboning”—is far from clear, and in some situations, this practice could generate real concerns about free speech. It’s true, as well, that government efforts to counter online propaganda can sometimes express an unearned level of certainty about just how effective that propaganda was in the first place. And the government has made missteps of its own—such as the rollout of the poorly named Disinformation Governance Board, which Homeland Security quickly shuttered after backlash from the right. 

Still, the chaos and confusion currently shaping this landscape are to nobody’s advantage. The partnerships that have developed in the past few years weren’t perfect, but they were established for a reason after the chaos of Russian election interference in 2016. Prior to that year, the federal government and social media companies engaged in relatively little coordination when it came to foreign manipulation online. The government had intelligence on foreign interference but didn’t have internal data about how platforms were being used; the platforms had the data but no access to the intelligence. “We had no help from the government,” Alex Stamos, who in 2016 worked at Facebook as the company’s chief security officer, recalled in a recent episode of Stanford’s Moderated Content podcast. “The government told us absolutely nothing about what Russia and other countries were doing on the platforms.”

It remains unclear to what extent the St. Petersburg-based Internet Research Agency’s efforts actually had any effect on American voters. (The GRU’s hack-and-leak operation raises different questions.) But the debate around social media and election integrity served as a wake-up call for both platforms and government agencies. “The U.S. Intelligence Community’s ability to identify and combat foreign influence operations carried out via social media channels has improved since the 2016 U.S. presidential election,” the Senate Intelligence Committee wrote in a bipartisan 2019 report released under Republican Chairman Sen. Richard Burr. “Communication and information sharing between government agencies and the social media companies has been a particular point of emphasis, and the Committee strongly supports these efforts.”

What is clearer is the extent to which the now seemingly abandoned collaborations bore fruit. In 2017, the FBI launched the Foreign Influence Task Force to “identify and counteract malign foreign influence operations targeting the United States.” That same year, Homeland Security designated election infrastructure as part of the nation’s critical infrastructure—meaning that its “incapacitation or destruction” would be a “devastating” blow to the United States. “Building resilience to foreign influence operations and disinformation” is part of CISA’s responsibilities in supporting that infrastructure. And 2018 saw the launch of the EI-ISAC. (Even more recently, in 2023, the Department of Justice established the National Security Cyber Section within the National Security Division, which will oversee the investigation and prosecution of both cyberattacks and “cyber-enabled” threats to elections, meaning social media influence efforts.)

“[P]roactive information sharing with social media companies facilitated the expeditious review, and in many cases removal, of social media accounts covertly operated by Russia and Iran,” reported the Office of the Director of National Intelligence (ODNI) in a declassified March 2021 assessment of foreign attempts to interfere in the 2020 election. Among the splashier foreign influence campaigns was an Iran-based effort, identified publicly by the FBI and the ODNI in October 2020: According to the Justice Department, two Iranian hackers sent threatening emails to U.S. voters, claiming to be from the Proud Boys, after hacking into voter information from a state election website. Meta removed material related to this campaign from its platform after receiving the FBI’s tip-off. 

As part of the Iranian campaign, the Justice Department alleged, the hackers also messaged politicians, campaign workers, and journalists—claiming to show evidence that Democrats were exploiting “serious security vulnerabilities” to hack into election systems and steal votes. This underlines an added complication for government efforts to protect elections: Cyberattacks need not be successful to be effective. Claims that election infrastructure has been compromised—regardless of their veracity—can contribute to diminished confidence in election results. 

While these new partnerships focused largely on propaganda from abroad, they also helped election administrators and platforms address falsehoods coming from within the United States. The EI-ISAC provided state and local officials with a centralized clearinghouse for reporting falsehoods—such as a rumor that election workers in Arizona’s Maricopa County had deceived voters into filling out ballots with Sharpie pens that couldn’t be read properly by voting machines. The Election Integrity Partnership tracked how these rumors spread and alerted social media platforms to falsehoods that might violate their policies. CISA’s “Rumor Control” service debunked falsehoods like “Sharpiegate” in real time. (The agency’s enthusiasm for batting back false claims of election theft would lead President Trump to fire CISA Director Chris Krebs on Nov. 17, 2020.)

These efforts, obviously, didn’t work perfectly. While Election Day itself went off relatively smoothly, lies and rumors about a stolen election, egged on by Trump, coalesced into the “Stop the Steal” movement and exploded into violence on Jan. 6, 2021. Still, this collaboration represented a major step forward from 2016, something to build and iterate on for future elections. It should have been ready to tackle the challenge of the next presidential election in 2024. Instead, the government has pulled back.

What’s more, this hasty retreat also raises questions about the government’s ability to maintain crucial partnerships outside the space of social media influence. Over and over, the U.S. government has advocated for a whole-of-society effort to counter the range of cyber threats effectively. Public-private collaboration is a key aspect of cybersecurity policy; in fact, scaling such partnerships is a strategic objective in the Biden administration’s National Cybersecurity Strategy. While the private sector benefits from the broader intelligence about ongoing threats that the government can offer, companies serve the government “as an early warning system to cyber threats, a partner in remediation, and a collaborator in new defense strategies.”

Cybersecurity and social media influence operations are different beasts, but sudden changes in the government’s approach to one can have an effect on the other. If companies feel that the government isn’t serious about addressing these threats, it might feed private-sector skepticism of other cybersecurity initiatives demanding similar collaboration. With Republican politicians framing even benign conversations between government agencies and technology companies as evidence of a nefarious plot, it’s reasonable to imagine that other private entities would think twice about engaging with government outreach. Who wants to put their employees in danger of being doxxed, harassed, and dragged before Congress, as Twitter’s former trust and safety leadership has experienced?

So far, cybersecurity has largely remained outside the political and cultural hurricane engulfing social media companies. But in the electoral context, it’s not always so easy to distinguish between worries about social media influence and cybersecurity concerns—as the Iranian effort in 2020 shows, not to mention the overlapping GRU hack-and-leak and Internet Research Agency social media operations in 2016. It’s not hard to imagine a comparable effort in 2024 that falls along political fault lines such that a cybersecurity firm becomes a convenient target for the ire of the right. Recall how, in 2019, a widespread conspiracy theory on the right posited that cybersecurity company CrowdStrike had coordinated with the Democratic Party to frame Russia for the 2016 hack of the Democratic National Committee. 

Already, there are indications that the campaign against “censorship” is expanding to engulf public-private partnerships more broadly. In December 2023, Jordan’s subcommittee held a hearing criticizing the nonprofit Cyber Threat Intelligence (CTI) League as an agent of government censorship—no matter that the focus of the group, which has collaborated with CISA, is on protecting entities like hospitals from cyberattacks. The “GOP’s contention that everyone is out to censor their [F]irst [A]mendment rights is starting to take aim at info sharing efforts between the private sector and the [government] to fight Internet and computer security threats,” warned cybersecurity expert Brian Krebs following the hearing. According to Politico, the attacks on CTI League have caused problems within the Joint Cyber Defense Collaborative (JCDC), a program launched by CISA to marshal private-sector resources to respond to cybercrime, as JCDC participants worry that CISA won’t stand up for them if right-wing outrage turns their way. “You want us to go to battle on a dangerous battlefield, and we don’t know if you’re actually going to show up alongside us,” CTI League founder Marc Rogers told Politico. 

While the government’s sudden vanishing act raises questions about its commitment to these partnerships, social media platforms aren’t blameless, either. Going into 2024, many social media companies have slashed their trust and safety teams in the name of increasing efficiency. In 2023, Twitter (now X), Google, and Meta all laid off a significant portion of the teams focused on countering false information online. And policy rollbacks at Meta, Twitter, and YouTube have also weakened existing protections against the spread of election falsehoods.

In other words, both the public and private sides of this public-private partnership are buckling under strain. Caught in the middle are the people—government officials, election workers, academic researchers, and tech employees working in trust and safety—trying to do their best to ensure a safe election. How much can they accomplish without confidence in the institutions that support them?


Quinta Jurecic is a fellow in Governance Studies at the Brookings Institution and a senior editor at Lawfare. She previously served as Lawfare's managing editor and as an editorial writer for the Washington Post.
Eugenia Lostri is Lawfare's Fellow in Technology Policy and Law. Prior to joining Lawfare, she was an Associate Fellow at the Center for Strategic and International Studies (CSIS). She also worked for the Argentinian Secretariat for Strategic Affairs, and the City of Buenos Aires’ Undersecretary for International and Institutional Relations. She holds a law degree from the Universidad Católica Argentina, and an LLM in International Law from The Fletcher School of Law and Diplomacy.
