The State Department’s X Directive and the End of Platform Independence
A cable endorsing a social media platform by name as a tool of U.S. diplomacy and military psychological operations would have been unthinkable—until recently.
Secretary of State Marco Rubio signed a cable this week directing U.S. embassies and consulates around the world to launch coordinated campaigns countering foreign propaganda. The cable explicitly endorses Elon Musk’s X as an “innovative” tool for the effort and instructs U.S. diplomatic posts to align their work with the Pentagon’s Military Information Support Operations (MISO), the military’s psychological operations apparatus, formerly known as PSYOP. Rubio identifies five operational goals—countering hostile messaging, expanding information access, exposing adversarial behavior, elevating local voices sympathetic to U.S. interests, and “telling America’s story”—and instructs embassies to recruit local influencers and community leaders to carry U.S.-funded narratives in ways designed to feel organically local rather than centrally directed.
The idea that the State Department would issue a formal cable endorsing a specific social media platform by name as a tool of U.S. diplomacy—let alone military psychological operations—would have been, until recently, almost unthinkable. Yet the news today feels almost ordinary, the product of a years-long structural transformation that dismantled, piece by piece, the legal accountability, operational independence, and institutional resilience that once made so cozy a relationship between government and platforms inconceivable.
What makes this cable remarkable is the extent to which it represents a departure from how U.S. technology platforms have historically interacted with state power—including with the U.S. government. For decades, U.S.-based social media companies operated as something closer to institutional rivals of government control over online speech, foreign or domestic. Google famously clashed with the Chinese government over censorship of its search engine and ultimately redirected its Chinese operations to Hong Kong rather than comply with censorship demands. Facebook and Twitter both resisted Brazilian court orders to remove content and to identify users. Twitter—before its acquisition—went to court to resist government data requests, publishing regular transparency reports and fighting national security letters that came with gag orders. These companies were imperfect actors, but their general posture was to resist governments that sought to use their platforms as instruments of state messaging.
There are deep normative and legal reasons why this posture of resistance took hold. Normatively, U.S. tech companies are, almost inevitably, products of a legal and cultural environment shaped by the First Amendment—one that treats government interference with speech, particularly compelled or state-directed speech, as illegitimate. Even though private companies are not bound by the First Amendment’s limits on government restrictions, the broader legal culture created an institutional bias against both the wanton removal of user speech and the surrender of editorial or algorithmic control to any government actor.
Within the United States, those norms were backed by law. The government’s ability to pressure private platforms into moderating content on its behalf—so-called jawboning—faced serious constitutional scrutiny under the First Amendment. That scrutiny reached its apex recently in Murthy v. Missouri, a case driven by conservatives who alleged that the Biden administration had improperly coordinated with social media platforms to suppress disfavored speech. The case reached the Supreme Court, where Justice Amy Coney Barrett, writing for the six-justice majority, expressed deep skepticism of the plaintiffs’ evidentiary foundation—finding that the plaintiffs lacked standing because they had failed to adequately trace their alleged injuries to specific government communications.
The Court declined to find that the Biden administration’s interactions with platforms rose to the level of unconstitutional coercion, leaving the underlying legal framework largely unresolved. However, the decision dealt a blow to the factual allegations of a censorship campaign. Today, the same political coalition that pursued Murthy on the theory that government-platform coordination constitutes unconstitutional censorship now presides over an administration openly directing a private platform to serve state messaging objectives.
Despite those legal and normative guardrails, the potential to shape political opinion through control over algorithmic platforms was never lost on authoritarian governments. China’s Great Firewall represents the most comprehensive effort at state information control in the digital age—not just blocking foreign platforms but also manipulating domestic ones such as WeChat and Weibo to surface state-approved content, disappear dissent, and surveil political opposition. Russia’s efforts have been oriented more externally, deploying influence operations across Western platforms through the Internet Research Agency while simultaneously restricting domestic platforms.
Hungary under Viktor Orbán engineered effective state control over much of its media ecosystem, including social media advertising flows and algorithmically amplified pro-government content funded by the state. What these regimes understood long before Western democracies fully reckoned with it was that the architecture of social media—recommendation algorithms, content moderation decisions, amplification mechanics—is not neutral. It is a political instrument. The question was never whether social media could be weaponized; it was only who would wield the controls.
Elon Musk’s acquisition of Twitter in 2022 was the pivot point. The structural consequences of that transaction alone were significant: By taking a publicly traded company private, Musk freed Twitter’s content moderation decisions from several key constraints, from the transparency obligations imposed by SEC disclosure requirements to the pressure of a corporate board representing public shareholders attuned to brand safety and advertiser concerns. Gone was the quarterly accountability to institutional investors who cared about whether the platform’s moderation practices could create legal or reputational liability.
And the structural change at Twitter was followed by an operational gutting. Musk systematically dismantled Twitter’s trust, safety, and content moderation infrastructure. The teams that had worked, however imperfectly, to maintain platform integrity, not just for commercial reasons but to limit the spread of coordinated inauthentic behavior, state-linked influence operations, and targeted harassment, were gone within months of Musk’s takeover. With both the corporate accountability architecture and the internal operational safeguards stripped away, the platform’s amplification and suppression mechanics became, in effect, tools deployable at the discretion of whoever held the controls—which in practice meant Musk. The result was a platform that could promote or bury content not because of policy but because of preference—and whose owner was, by late 2024, one of the most politically influential figures in U.S. politics.
Trump’s return to the presidency in January 2025 marked a corresponding shift on the government side of the equation. The Republican Party had spent years positioning itself as the defender of online free speech. Rep. Jim Jordan (R-Ohio) branded the crusade as a fight against the “censorship industrial complex,” an alleged network of government actors, nongovernmental organizations, and platform trust-and-safety teams supposedly working in concert to suppress conservative voices. The Murthy litigation was the legal expression of that project. But once in power, the Trump administration pivoted with remarkable speed from critic of government-platform entanglement to its most enthusiastic architect. The tech industry, for its part, proved a willing partner—and the administration proved willing to deliver on its promise to push the online world to the right.
For years, major technology companies, and social media platforms in particular, sought favorable treatment from Washington on issues ranging from antitrust enforcement to international market access. Those efforts had yielded limited returns across multiple administrations, including in Trump’s first term, partially because political hostility to the tech industry cut across ideological lines. The second Trump administration changed that calculus entirely. Trump made it unmistakably clear that his government was prepared to advocate for U.S. tech companies’ interests both at home and abroad.
That advocacy was visible in almost every theater simultaneously. Vice President Vance traveled to Paris for the AI Action Summit in February 2025, where he delivered remarks that framed European digital regulation as a form of censorship. He reprised the censorship theme days later at the Munich Security Conference, warning European allies that the Digital Services Act (DSA)—the European Union’s broad framework for regulating large online platforms—was incompatible with American free speech values. Brendan Carr, the chairman of the Federal Communications Commission, followed with his own European tour, including remarks in Spain that echoed the administration’s criticism of the DSA.
These were not routine diplomatic exchanges. The administration was explicitly intervening on behalf of U.S. tech companies against the regulatory sovereignty of allied democracies. Jordan, meanwhile, redirected the machinery of his House Judiciary Committee—which had spent years investigating the domestic “censorship industrial complex”—to conduct hearings on the foreign censorship of U.S. tech companies, even as critics noted that the EU regulations in question applied only within European borders, to companies operating by choice in European markets.
At home, the administration moved aggressively to provide structural favors. The Federal Trade Commission under former Chair Lina Khan had effectively placed a moratorium on major technology mergers through its litigation strategy, chilling acquisitions by companies including Meta, Amazon, and Microsoft. The change in administration brought a much more relaxed enforcement posture, with major deals in cloud computing, artificial intelligence infrastructure, and digital advertising given a new runway. Meanwhile, the administration brokered the forced divestiture of TikTok from its Chinese parent ByteDance—framing the move publicly as a national security measure, but structuring it in a way that transferred ownership of one of the world’s most powerful recommendation algorithms to a consortium of American billionaires, ensuring it remained within the orbit of domestic political influence.
Given the extent to which the Trump administration demonstrated a willingness to accommodate the tech sector—in regulatory forbearance, international advocacy, antitrust relief, and the engineered transfer of a major platform asset—it would have been surprising and uncharacteristic if the administration did not expect something in return.
What came back was both financial and operational. Technology executives contributed lavishly to Trump’s second inauguration, with figures including Meta’s Mark Zuckerberg, Amazon’s Jeff Bezos, and Google’s Sundar Pichai making prominent donations to inaugural events. Facebook reached a settlement with Trump in a lawsuit he had filed over his post-January 6th suspension, reportedly agreeing to pay $25 million, which Trump himself claimed “brought [Zuckerberg] into the tent.”
But the exchange extended beyond money into the architecture of speech itself. Documents produced in ongoing litigation revealed direct communications between Zuckerberg and Musk at the time Musk served as a federal employee through DOGE. The two discussed the removal of specific categories of content from Meta’s platforms at Musk’s apparent direction. Around that time, X accounts associated with critics of the Trump administration found themselves suspended, throttled, or algorithmically buried—not in any systematic, policy-driven way, but in patterns consistent with discretionary enforcement against political opposition. Among those affected were journalists, national security reporters, and civil society organizations whose content experienced dramatic reach suppression following criticism of Musk or the administration’s policies.
The Rubio cable now makes the arrangement explicit. The State Department has formally directed U.S. diplomatic posts to coordinate information operations with a specific, privately owned social media platform and with the Pentagon’s psychological operations apparatus. The cable’s endorsement of X’s Community Notes feature—a crowdsourced moderation mechanism that has drawn significant academic criticism for its susceptibility to coordinated manipulation—as an instrument for countering “anti-American propaganda operations without compromising free speech” is, at a minimum, a remarkable exercise in circular reasoning: the government endorsing, for use in state-directed information operations, a moderation tool on a platform owned by a former (and perhaps still current) senior government advisor.
What this trajectory reveals is less a sudden rupture than the culmination of a series of structural changes that individually appeared incremental but collectively transformed the relationship between state power and platform architecture. The privatization of Twitter removed public-market accountability. The gutting of content moderation infrastructure removed operational safeguards. The political alliance between the administration and the tech sector removed institutional resistance. And now a formal diplomatic cable removes the last pretense of arms-length separation between U.S. government messaging objectives and the platforms that carry them.
The legal questions that Murthy left unresolved—about when government pressure on private platforms crosses the constitutional line—will almost certainly be relitigated in this new context. But the more immediate reality is that the internet Americans and global audiences navigate is increasingly shaped not merely by the preferences of platform owners and advertisers, but by the strategic communication objectives of the U.S. government, implemented through platforms that have every financial and regulatory reason to cooperate. The question is no longer whether the government can use social media as a tool of statecraft. It already does. The question now is whether any institution—legal, normative, or structural—retains the capacity to check it.
