Armed Conflict, Cybersecurity & Tech

The Hidden Risks of Platform Control Over Historical Memory

Alena Gribanova
Wednesday, November 26, 2025, 10:01 AM

Emergency powers in the EU’s Digital Services Act risk destroying important evidence for future courts and historians.

Global internet (Mohamed Hassan, https://pxhere.com/en/photo/1451419; CC0 1.0, https://creativecommons.org/publicdomain/zero/1.0/).

Published by The Lawfare Institute
in Cooperation With
Brookings

In modern conflicts, the first casualty is not truth but visibility. Platforms decide which images of war survive and which disappear. The European Union’s new Digital Services Act (DSA) gives regulators unprecedented powers to shape this process. Information has long been a strategic asset in war, but contemporary conflicts have magnified its importance through the sheer volume of digital evidence they generate. Control over digital narratives now shapes public perception of war as it unfolds. Digital platforms have become arenas where evidence of violence is quietly erased. Tech companies apply moderation policies that often mirror government pressure, cutting the public off from vital information during crises and depriving future historians of key sources. Geopolitical tensions have exposed how quickly digital records can disappear from major platforms, casting doubt on the permanence of the digital historical record.

Military operations in late 2023 revealed this trend with brutal clarity. The armed conflict in Gaza during fall 2023 triggered a massive purge of digital evidence: Social media platforms systematically removed millions of records documenting violent events. These filtering systems don’t take direct orders from governments, but they implement moderation rules designed by tech companies under political pressure. Human Rights Watch documented 1,050 cases of removal or other suppression of conflict-related content on Instagram and Facebook alone during the first 30 days of the conflict. Over the same period, the Israeli government submitted 9,500 requests to leading tech companies for the removal of digital material. Eye on Palestine, a news outlet with an audience of 6 million that serves as a main source of on-the-ground reporting, was temporarily deactivated. As a result, its posts virtually stopped appearing in users’ feeds, dramatically reducing the number of people who could find updates about the conflict on Facebook.

The large-scale deletion of digital documents illustrates the complexity of information management on digital platforms during geopolitical crises. Corporate actors in reality juggle several competing pressures: maintaining social stability and making money, cooperating with national authorities, and preserving digital evidence that could later support humanitarian law cases. Against this backdrop, the DSA creates two distinct channels of government influence on platforms. The first, a crisis response mechanism under Article 36 that has not yet been invoked, empowers the European Commission, acting on a recommendation from the European Board for Digital Services, to declare a “crisis” and then require very large platforms to adapt their algorithms and add further moderation systems. The second, a non-crisis feature, is Article 22, which establishes a “trusted flagger” regime requiring platforms to prioritize content takedown notices from state-accredited organizations.

Although the DSA formally requires the commission to maintain a publicly accessible pan-European list of trusted flaggers, implementation of this database has been fragmented, making it difficult for outside observers to determine who actually enjoys this privileged status. Taken together, the untested crisis response tools and the emerging trusted flagger regime point to a broader risk: emergency management combined with delegated enforcement could quietly determine which evidence of conflict remains in, or disappears from, the public record.

Three targeted reforms can help mitigate these new risks. Independent oversight of crisis declarations would limit the most far-reaching interference in platforms’ information flows. Stricter accountability for trusted flaggers would curb the ability of state actors to quietly suppress information, and an escrow-based archiving requirement would ensure that documentation of international crimes is preserved even after it is removed from public view.

Why Platforms Matter in Crises

Digital communication platforms have become critical repositories of visual evidence of modern armed conflicts, as shown by open-source investigations into chemical weapons attacks in Syria and mobile documentation of atrocities in the Democratic Republic of Congo. The conflict in Gaza that began in October 2023 generated civilian visual content on an unprecedented scale in the history of the Israeli-Palestinian conflict. Open-source archives and human rights groups have organized and preserved this material, creating a body of documentary evidence about the conflict. Such collections of visual information go beyond traditional journalism: Their legal relevance lies in their potential use as evidence that particular acts constitute war crimes. International human rights bodies draw on these sources to document violations; future researchers gain a foundation of primary sources for historical reconstruction.

The Office of the Prosecutor of the International Criminal Court (ICC), along with specialized investigative mechanisms of the United Nations and nongovernmental monitoring bodies, regularly relies on digital artifacts of similar origin. At the same time, only a small fraction of the vast volume of digital material generated during conflicts ever meets the evidentiary standards of criminal courts. Turning online content into admissible evidence requires careful verification, authentication, and chain-of-custody safeguards, as practitioners such as SITU Research have emphasized in their efforts to bring digital evidence into the courtroom.
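To make the chain-of-custody idea concrete, the sketch below shows, in simplified form, how an archive might record a cryptographic hash and provenance metadata for each collected item so that later tampering can be detected. This is an illustrative assumption about one possible workflow, not a description of SITU Research’s or any court’s actual tooling; the field names and log format are hypothetical.

```python
# Illustrative sketch only: a minimal chain-of-custody record for a collected file.
# The collector, source_url, and log format are hypothetical; real workflows follow
# the Berkeley Protocol's far more detailed requirements.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_custody(path: Path, source_url: str, collector: str, log: Path) -> dict:
    """Append a provenance entry (hash, source, collector, timestamp) to a JSON-lines log."""
    entry = {
        "file": path.name,
        "sha256": sha256_of(path),
        "source_url": source_url,
        "collector": collector,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
    with log.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def verify_custody(path: Path, recorded_sha256: str) -> bool:
    """Re-hash the file and check that it still matches the recorded digest."""
    return sha256_of(path) == recorded_sha256
```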

The DSA’s New Crisis Powers

In February 2024, the Digital Services Act became fully applicable across the EU, giving Brussels new powers over the crisis management of online platforms. Once the European Commission declares a crisis, it may require companies to evaluate whether their services contribute significantly to the serious threat. The commission can then prescribe measures to mitigate the risk.

During a crisis, the commission may require algorithmic adjustments that increase the visibility of certain materials and reduce the reach of others. Legally, this power derives from Article 36(1)(b) of the DSA, which allows the commission, following the declaration of a crisis, to require very large online platforms and search engines to identify and apply mitigation measures of the kind set out in Article 35(1). Article 35(1)(d)–(e) explicitly refers to “testing and adapting their algorithmic systems, including their recommender systems” and “adapting their advertising systems,” while Recital 91 clarifies that such crisis response measures may include adapting content moderation processes and the relevant algorithmic systems, as well as further strengthening cooperation with trusted flaggers.

The regulation’s language is notably broad; platforms may be required to adapt their content moderation processes, terms of service, algorithmic systems, and advertising mechanisms. In practice, this could include suppressing publications labeled as disinformation, increasing the reach of official government sources, and completely restructuring what reaches millions of users’ feeds. The specific steps are chosen by the companies themselves, although the imperative to act comes from the commission and is legally binding. The duration of such measures is limited to three months, though extensions are permitted. Fundamental political conflicts underlie this technical procedure, and the question of who can declare a crisis under Article 36 remains crucial. The definition in the DSA is deliberately vague: A crisis is deemed to occur when extraordinary circumstances lead to a serious threat to public security or public health in the Union. Clear criteria are not spelled out, and decisions are essentially left to the discretion of the commission and the platforms. A mechanism designed to protect public safety thus creates the risk of political abuse; governments could push for a crisis declaration to suppress inconvenient narratives under the pretext of combating disinformation.

The Trusted Flagger Problem

The DSA establishes a two-tier system of content moderation through a mechanism of privileged notifiers. Article 22 of the regulation creates a priority track for complaints from entities accredited by national digital services coordinators; such removal requests receive expedited consideration and heightened attention. State institutions and nongovernmental entities that have demonstrated competence and independence in identifying illegal digital content are eligible for this status. Platform operators must respond to these notices faster than they process standard user complaints, and this priority applies across the European Union to every platform within the DSA’s scope.

By February 2024, when the DSA began to apply to all intermediary service providers, the register of privileged notifiers curated by the European Commission remained almost empty. Potential applicants were reticent, in part because the regulatory landscape was perceived as ambiguous, a symptom of a tool designed for times of social turbulence. October 2025 brought no significant change. Skeptics emphasized the lack of explicit public criteria and the information vacuum regarding the identities of trusted flaggers, factors that erode the system’s legitimacy.

National bodies overseeing accreditation procedures could, in practice, gain powerful tools for covert censorship. For now this is primarily a potential risk under the DSA, as only a small number of trusted flaggers have been designated, and public data on the pattern of their notifications remains scarce. Similar dynamics can, however, be observed outside the EU. In Israel, for example, the Ministry of Justice’s cyber unit has for years sent thousands of complaints to major platforms requesting the removal of Palestinian content, with a compliance rate of 80-95 percent, often justified by accusations of “extremism” or “incitement.” Platform operators, committed to prioritizing such requests, eliminate thousands of posts, including legitimate criticism and documentary evidence of violations. This practice conflicts with the Berkeley Protocol on Digital Open Source Investigations, developed by the University of California, Berkeley, at the initiative of the Office of the United Nations High Commissioner for Human Rights. The document sets out a comprehensive methodology for collecting, analyzing, and presenting digital artifacts in criminal, human rights, and humanitarian proceedings, and it requires the preservation of primary datasets with verified provenance chains, a fundamental condition for accountability. The loss of such materials undermines the evidentiary basis for prosecuting offenders.

Government Pressure on Platforms

Following the events of Oct. 7, 2023, Sada Social documented over 25,000 digital violations targeting Palestinian content across social media platforms in 2024. Publications about Kashmir disappeared from public access following the mass blocking of Pakistani accounts. These incidents show how moderation practices reproduce existing power structures during crises; the model of platform neutrality does not hold.

As recent practice shows, government pressure on large technology companies continues to grow, a trend particularly evident in the workload of the compliance teams that handle such requests. The pressure is transnational, extending to nearly 150 jurisdictions. Since 2020, for example, South Korea has sent Google about 33,000 content removal requests (10 percent of the global total), India has submitted about 16,000 requests (5 percent), and Russia has sent more than 211,000 requests (64 percent), according to an analysis of Google’s Transparency Report by Surfshark. Beyond these overall figures, similar forms of pressure are particularly evident in conflict situations, where governments work closely with platforms to remove content disseminated by occupied populations, as shown by recent data on Israel’s campaigns to remove Palestinian posts on social media.

Algorithmic Control of Memory

Studies in Science and Nature show that algorithmic changes significantly alter what users see and how they behave, though beliefs remain stable over a three-month period. Under pressure from extraordinary circumstances, corporations reconfigure recommendation filters, limiting the visibility of individual publications.

The shift from direct deletion to algorithmic filtering completes the picture of digital censorship. Downranking graphic images of military action or posts critical of official positions excludes such content from public discourse; technical deletion is not even required. Soft censorship emerges through visibility management: The post is retained in the database, its reach is artificially reduced, and collective memory loses this evidence. Invisible ranking matrices restructure visibility, redirect user trajectories, and create collective amnesia. Algorithmic censorship is not a conspiracy theory; it is a feature of data-driven platform governance, in which visibility decisions are made by automated systems on the basis of a multitude of opaque factors.
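To illustrate the mechanism, the toy sketch below shows how a single crisis-mode “visibility multiplier” can push a post to the bottom of a ranked feed without deleting it. This is a deliberately simplified assumption about how downranking works in general; real recommender systems weigh thousands of signals, and no platform’s actual algorithm is reproduced here.

```python
# Toy illustration of "soft censorship" via downranking; not any platform's real algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float       # baseline relevance signal
    crisis_flagged: bool = False  # e.g., labeled "graphic conflict content"

def rank_feed(posts: list[Post], crisis_mode: bool, downrank_factor: float = 0.05) -> list[Post]:
    """Order posts by score; flagged posts remain stored but lose nearly all reach in crisis mode."""
    def score(p: Post) -> float:
        s = p.engagement_score
        if crisis_mode and p.crisis_flagged:
            s *= downrank_factor  # the post is never deleted, only made hard to find
        return s
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("eyewitness-video", engagement_score=9.7, crisis_flagged=True),
    Post("official-statement", engagement_score=4.2),
    Post("sports-highlight", engagement_score=3.1),
]

# In crisis mode, the most-engaging eyewitness video drops to the bottom of the feed.
for p in rank_feed(posts, crisis_mode=True):
    print(p.post_id)
```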

For the user, the effect is tantamount to deletion: The published material never reaches its intended audience. Regulatory instruments were designed with balance in mind, but in practice their effectiveness is fragmented. The DSA mandates the publication of detailed reports on moderation decisions, independent audits, and ongoing risk assessments, which form the basis for public accountability and for detecting selective pressure. Yet platforms withhold critical details: how their algorithms operate, the criteria for reducing visibility, and data on content distribution during crises. Transparency reports focus on direct removal, while the more widespread practice of algorithmic suppression remains in the shadows. DSA disclosure requirements lack specific metrics, leaving platforms with wide discretion. This opacity is especially problematic during crises, when suppressed documentary evidence can leave war crimes uninvestigated.

Three Policy Solutions 

How can regulators ensure accountability without subjecting platforms to state control? Three targeted reforms can address this problem: independent multilateral oversight of crisis declarations under Article 36; stricter accountability requirements for trusted flaggers under Article 22; and an escrow-based archiving obligation for content that could serve as evidence of international crimes.

First, independent oversight of crisis declarations is needed. Oversight of crisis classification, as envisaged in a revised Article 36 of the DSA, requires a different institutional architecture than the current one. Instead of allowing the European Commission to determine the status of a “crisis” on its own, the decision should be delegated to an autonomous expert group, with mandatory participation from civil society representatives and independent media outlets. By verifying the justification for any activation of the crisis response mechanism before the commission can require platforms to adjust their algorithms, such a group would significantly reduce the likelihood of political abuse. To ensure a prompt response, the group would be required to make decisions within 48 hours, with reviews of any crisis determination every three months.

Second, to increase the accountability of trusted flaggers, EU co-legislators should thoroughly revise Article 22 of the DSA to prevent the “trusted flagger” regime from quietly evolving into a tool of state control over content. Digital services coordinators would be required to publish detailed justifications for each accreditation decision, including the applicant’s mandate. The status of each trusted flagger would be reviewed every 12 months, with a mandatory assessment of the accuracy of its removal requests, and the results of that assessment would be published. Platforms, in turn, would publish aggregated statistics showing the percentage of requests submitted by each trusted flagger that were granted across various content categories.

Third, regulators, including the European Commission and national digital services coordinators, can introduce an escrow-style archiving obligation for evidentiary content. This would be the highest-priority substantive amendment for the DSA’s upcoming update, and it builds on the Berkeley Protocol. Under this escrow archiving regime, materials removed for violating platform rules or at the request of a government, but containing images or descriptions of potential war crimes, crimes against humanity, or genocide, would automatically be placed in cryptographically secure storage. Access would be limited to accredited international tribunals, UN investigative mechanisms, the ICC, and national prosecutors pursuing serious international crimes. Governments that initiated the removal would not be granted such access. Authority to determine the categories of content subject to mandatory preservation during periods of armed conflict would be vested in an independent UN mechanism, comparable in role to the International, Impartial and Independent Mechanism (IIIM) for Syria. Failure to comply would be subject to significant fines under the DSA enforcement regime.
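As a rough illustration of what such an escrow deposit might look like in code, the sketch below encrypts removed content, records tamper-evident metadata, and releases plaintext only to accredited bodies. Everything here is hypothetical: the accredited-body identifiers, the metadata fields, and the use of a single symmetric key (via the widely used Python cryptography library) stand in for what would in practice be hardened infrastructure with independent key custody.

```python
# Hypothetical sketch of escrow archiving for removed evidentiary content.
# Requires the third-party "cryptography" package (pip install cryptography).
import hashlib
import json
from datetime import datetime, timezone
from cryptography.fernet import Fernet

ACCREDITED_BODIES = {"icc-otp", "un-investigative-mechanism"}  # illustrative identifiers

class EscrowArchive:
    """Stores removed content encrypted; only accredited bodies can retrieve it."""

    def __init__(self) -> None:
        self._key = Fernet.generate_key()  # in practice, keys would sit with an independent custodian
        self._fernet = Fernet(self._key)
        self._vault: dict[str, dict] = {}

    def deposit(self, content: bytes, removal_reason: str, origin_platform: str) -> str:
        """Encrypt removed content and keep tamper-evident metadata; return its content hash."""
        content_hash = hashlib.sha256(content).hexdigest()
        self._vault[content_hash] = {
            "ciphertext": self._fernet.encrypt(content),
            "metadata": json.dumps({
                "removal_reason": removal_reason,
                "origin_platform": origin_platform,
                "deposited_at": datetime.now(timezone.utc).isoformat(),
            }),
        }
        return content_hash

    def retrieve(self, content_hash: str, requester: str) -> bytes:
        """Release plaintext only to accredited tribunals or investigative mechanisms."""
        if requester not in ACCREDITED_BODIES:
            raise PermissionError(f"{requester} is not accredited to access escrowed evidence")
        return self._fernet.decrypt(self._vault[content_hash]["ciphertext"])
```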

These reforms do not represent a radical departure from the DSA’s logic; they are merely minimal provisions to ensure that its emergency powers do not destroy the very evidence that future courts and historians need to establish a factual historical record and secure justice and accountability.


Alena Gribanova received an MA in World Politics and International Relations from the University of Pavia and a bachelor’s degree from Lomonosov Moscow State University. She focuses on the politics of social media, visibility, and truth.
