When Misinformation Means the Difference Between Life and Death
A Review of Daniel Silverman, “Seeing Is Disbelieving: Why People Believe Misinformation in War, and When They Know Better” (Cambridge University Press, 2024).
Disinformation has become big business over the past decade—especially since 2016’s controversies about Russian interference in the U.S. presidential election and the role of disinformation in the United Kingdom’s Brexit referendum. Since then, government agencies, communications companies, think tanks, open-source intelligence collectives, and academic researchers have converged on the issue, which is frequently framed as an existential threat to national security, liberal democracy, and social cohesion. The coronavirus pandemic then sent concern about disinformation and misinformation into overdrive. The pandemic saw people bombarded with false or misleading information, driven by fear about the virus and uncertainty about its cause, its symptoms, and what the right response to it should be.
With so much written on disinformation and misinformation over the past decade, original research on these topics has become rare. But Daniel Silverman’s “Seeing Is Disbelieving: Why People Believe Misinformation in War, and When They Know Better” is genuinely novel.
The book examines why people believe or reject disinformation and misinformation in war. Its argument is both original and common-sensical: that people close to the fighting are more likely to seek out and believe accurate information, but those farther away, either within the theater of war or watching the conflict from afar, are more susceptible to inaccurate claims. There are two main reasons for this. First, those closest to the fighting can often see what is really going on. Second, they have especially strong incentives to gain accurate information as they have more “skin in the game.” Their physical safety and that of their relatives, homes, or businesses may depend on securing accurate information about the conflict. Those at a distance lack these urgent incentives and, especially when embedded in partisan communication networks, are more likely to be deceived by false information about the war.
A note on terminology. Silverman prefers “misinformation,” which he describes as the most “inclusive” term for false and misleading information, regardless of the intent behind its spread. A footnote acknowledges the important distinction between disinformation, which is spread deliberately, and misinformation, which is spread inadvertently. The distinction matters less to Silverman’s book, however, because it concerns whether people believe dis- or misinformation, not the intent of whoever is spreading it. The book muddies the waters by using the term “factual misinformation,” which reads like an unintentional oxymoron. But ultimately, the book concerns whether people’s beliefs are false or unsubstantiated, not the motives of those who may have deceived them.
Silverman makes his case through mixed-methods analysis of surveys, interviews, and military strike data from the U.S. drone war against the Taliban in Pakistan, the war against the Islamic State in Iraq, and the Syrian civil war. He shows how in Pakistan and Iraq falsehoods about the conflicts were endemic, but closer proximity to military action made people less likely to believe them and more likely to seek out accurate information. Pakistani civilians in drone strike regions believed accurately that U.S. strikes largely discriminated between civilians and combatants. Pakistanis elsewhere in the country and in the diaspora, by contrast, widely believed the U.S. was targeting civilians indiscriminately.
Providing detailed evidence about public opinion from hard-to-reach conflict zones is a key contribution of “Seeing Is Disbelieving,” though collecting it requires pragmatism in getting the most out of whatever data is available. The Pakistan and Iraq cases involve correlating surveys with data on when and where military strikes took place. The Syria case relies instead on interviews in which citizens report how good they think they are at discerning true from false information, which they may over- or underestimate. Still, each chapter accrues evidence to strengthen Silverman’s argument. Indeed, the book is a testament to the value of fieldwork and collaboration between academics across the world. Silverman’s data draw on local researchers in each conflict, giving him access that few researchers get. The book is richer, and its argument more convincing, as a result.
The book’s focus on the micro dynamics of why people believe disinformation and misinformation in conflict zones is surprisingly novel given the long-recognized aphorism that truth is the first casualty in war. Much research on—and anxiety about—disinformation focuses on how political actors, such as Russian state propagandists or Donald Trump, spread it from the top down. Other research examines how social media spreads it horizontally. There is research on how individual variables, such as age, education, or partisanship, shape whether people believe disinformation or misinformation. But, as Silverman points out, few studies examine how being in different situations might affect belief in false information. In life-or-death situations—where knowing who is shooting at you and from where really matters—accurate information becomes especially valuable.
This insight is not entirely novel. Indeed, governments have long tried to restrict access to conflict zones in part because people there learn what the reality of war is like. This is an important reason why belligerents censor information from soldiers and journalists, and why internet blackouts are becoming increasingly popular during conflicts. Such measures allow political actors to deceive their citizens more easily, whether to maintain domestic morale or to hide failures and abuses from watching audiences.
What “Seeing Is Disbelieving” does is provide more nuanced evidence of how varied people’s susceptibility to false information is in war zones. Fascinatingly, it shows how refugees who have fled the immediate fighting may be just as susceptible to disinformation once they lose access to what is really happening on the ground. Diasporas are especially liable to be deceived as they are typically partisan about the conflict but distant from it.
“Seeing Is Disbelieving” is an academic monograph in structure, so fully appreciating its nuances requires a solid grounding in a range of quantitative and qualitative methods. But it is written well enough to appeal to industry professionals, students, and general audiences interested in why people believe disinformation and misinformation. Silverman’s prose is clear, unpretentious, and engaging, making the book’s concise 153 pages easy to read. His work combines the methodological detail of a doctoral dissertation with useful attempts to make its arguments accessible to a general audience, drawing on well-known examples from World War II, and even a Harry Potter movie at one point, to show how intuitive they are.
The book has the potential to have a far greater impact than most academic monographs, many of which stress their wider applicability but sometimes overclaim in doing so. As a micro-level study of how civilians experience conflict, it could have an impact similar to Stathis Kalyvas’s “The Logic of Violence in Civil War,” which upended prevailing thinking on why violence happens in civil war by suggesting that at the community level it may be driven in part by rational calculation rather than “ancient hatreds.”
Similarly, “Seeing Is Disbelieving” has the potential to reshape thinking about several aspects of war and conflict. First, it gives us a deeper understanding of how populations in war zones think. Second, it can help participants work out which populations’ beliefs they should prioritize in their communication efforts. Silverman implies that we should worry less about whether Ukrainians in the theater of war believe Russian disinformation, and more about whether Russia-sympathetic populations farther away believe it. By the same logic, his findings explain why Russia finds it easier to deceive the distant populations of Moscow and St. Petersburg about the war than civilians nearer the front. Future work studying these dynamics across a broader range of conflicts would be invaluable.
Third, as Silverman points out, his findings have implications for how counterinsurgency is practiced. The difficulty of conducting counterinsurgency in distant lands has long been recognized, but “Seeing Is Disbelieving” helps explain why. Based on its argument, if population-centric counterinsurgency is about winning the support of the local population in the theater of war, this should be easier the closer civilians are to the fighting. The greater challenge is to persuade those removed from the conflict—even within the country—who are more easily deceived, particularly if they are submerged in a partisan media environment. This has implications for which population groups should be prioritized in state communication efforts. But it also has implications for where insurgents recruit, if one thinks of the success the Islamic State had recruiting distant followers to join the Caliphate, partly because they could not see the reality of life there.
Silverman’s argument about how proximity to events shapes susceptibility to disinformation and misinformation is applicable far beyond the battlefield. He concludes the book with some preliminary thinking about pandemics and climate change, which, like war, cause death and destruction. One might assume that exposure to the Four Horsemen of the Apocalypse—death, war, famine, and pestilence—would make people more likely to seek accurate information about them. If your village is submerged by rising sea levels, your house is destroyed by wildfires, or your relatives are struck down with long COVID, you might be more likely to seek accurate information about these phenomena.
But, interestingly, these other fields do not appear to exhibit the same dynamics as war. Many people still doubt the science of climate change and COVID-19, despite being directly exposed to their effects. The reason, Silverman speculates, is that we may experience the effects of extreme weather directly, but we cannot directly perceive the cause of it. We cannot see what a vaccine does, and since we cannot prove a negative, we cannot prove that it has stopped us from getting a disease. As we cannot perceive these causal effects directly, we may be more vulnerable to false or misleading information about them.
A cynic might suggest, based on this logic, that we should send people to war zones or hospital wards so that when they experience reality close-up they will snap out of false consciousness and seek accurate information. This is hardly a policy solution for mitigating disinformation and misinformation. But, as Silverman suggests, getting more accurate information out from those in the theater of war is a useful step. How much impact this has remains to be seen, as people already experience conflicts remotely through their screens. And, of course, reporting what is really happening at the front is what good journalism already tries to do—and what belligerents often try to prevent, such as Israel’s blocking of independent journalists’ access to Gaza during the ongoing war. Silverman rightly asserts that efforts to get the truth out should be supported more strongly.
One of the positive developments in international politics this century is a far greater focus on the cognitive and communication dimensions of international conflict. There is often a degree of amnesia in this process, as there is a tendency to ignore lessons from older concepts such as propaganda and instead to fixate on supposedly new concepts. Even disinformation seems dated now. European governments are more concerned about FIMI (foreign information manipulation and interference), while Western military organizations such as NATO are moving on from Russian “hybrid warfare” to worrying about Chinese “cognitive warfare.” None of these is novel if one studies the history of communication and propaganda in war.
Silverman’s book reinforces the value of this focus on communication but also provides a welcome corrective. Most people experience war through screens and social media feeds, but war remains fundamentally physical. It is about killing people and destroying things. It is not just about whose story wins. There is something emphatic and undeniable about a tank rolling through your village, a shell landing in the town square, a bullet flying past your ear, that concentrates the mind in a way that few other things can.
The interesting question, and one Silverman’s research invites us all to think about, is what other issues his insights about what makes people believe disinformation and misinformation might apply to. The stakes are high because if we can understand better what motivates people to seek out accurate information, we could mitigate the harm disinformation and misinformation appear to be causing to democracy, social cohesion, and the world’s ability to solve the collective challenges we face. “Seeing Is Disbelieving” has provided an invaluable step in that direction, and its thought-provoking findings deserve to be read widely.
Thomas Colley is a senior visiting research fellow in war studies at King’s College London. His book, “Dictating Reality: The Global Battle to Control the News,” co-authored with Martin Moore, was published by Columbia University Press in October 2025.
