Published by The Lawfare Institute in Cooperation With Brookings
Editor's Note: Data should drive decision-making – the real question is how much it should do so. As big data and data analytics expand, it is tempting to assume they can solve many of the problems foreign policy decision-making has long faced. Chris Meserole, a pre-doctoral fellow here at Brookings, unpacks some of the issues big data raises for foreign policy and argues that it can inform our strategic reasoning but cannot supplant it.
We live in the era of big data and data analytics – and, increasingly, “data-driven decision-making.”
Yet, when it comes to national security, what would it mean for policy decisions to be data-driven? For the national security policy-maker, what can data and data analytics actually offer?
I’m not referring here to the use of data in implementing policy. When the Pentagon uses data analytics to cut procurement costs or when intelligence agencies use predictive analytics to identify potential targets, each is relying on data analytics to better execute policy.
By contrast, my concern is with using data to make policy. What does data-driven policy-making look like when it comes to national security?
To answer that, we need to walk through how we decide between competing policies in the first place. Very often, we reduce policy choice to a kind of shorthand. For example, we’ll often say something like, “We should intervene in Syria” or “I’m against the Iran deal.” Yet such catchphrases obscure a more complex thought process. Any time we advocate for one policy over another, what we’re really saying is, “a world in which we do X is more likely to be a better world than one in which we do Y.”
Every policy choice thus involves two sets of intuitions. The first set concerns how likely a given policy is to lead to a range of possible outcomes. The second concerns the value we assign each of those outcomes. Imagine if we were contemplating regime change. One set of intuitions would concern how likely we thought regime change would be to lead to a power vacuum, or to a dictator, or to a stable democracy. The other set would comprise value judgments about how much better or worse each of those outcomes would be compared to the status quo.
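The two sets of intuitions map directly onto the two inputs of an expected-value calculation. A minimal sketch of that framing, using entirely hypothetical probabilities and values (nothing here reflects a real assessment):

```python
# Each policy maps possible outcomes to (probability, value) pairs.
# The probabilities are the first set of intuitions; the values the second.

def expected_value(outcomes):
    """Sum probability-weighted values over all possible outcomes."""
    probs = [p for p, _ in outcomes.values()]
    assert abs(sum(probs) - 1.0) < 1e-9  # probabilities must sum to 1
    return sum(p * v for p, v in outcomes.values())

# Hypothetical intuitions about regime change vs. the status quo
# (values on an arbitrary scale where 0 = the status quo).
regime_change = {
    "power vacuum":     (0.5, -10),
    "new dictator":     (0.3,  -2),
    "stable democracy": (0.2,  +8),
}
status_quo = {"unchanged": (1.0, 0)}

print(expected_value(regime_change))  # -4.0 with these numbers
print(expected_value(status_quo))     # 0.0
```

With these invented numbers, intervening looks worse than the status quo; changing either the probabilities or the values can flip that conclusion, which is precisely why both sets of intuitions matter.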
Ideally, policy-making should involve careful deliberation about both sets of intuitions. Yet, in reality, we tend to focus much more on the value side. Sometimes that focus is deliberate: it’s easier to win a policy argument by assuming away any uncertainty about whether our policy will work and shifting the debate instead to a purely strategic or moral domain. But often it’s not deliberate at all. In fact, the strength of our convictions can bias our sense of how likely a policy is to work. When we believe deeply that a specific policy is the right policy, we can all too easily trick ourselves into thinking that it will inevitably work as intended.
Yet no matter how much we may try to frame policy debates in terms of values alone, probabilities are always at play. And that is where data can play a role: data analysis can remove many of the biases we may hold, consciously or not, about what the effect of a policy is likely to be.
Consider the debate over drone strikes. For the sake of simplicity, let’s focus on just two aspects of that debate: the potential gain of reducing terrorist operations and the potential cost of civilian casualties. If we limit the debate to those factors, then whether we are for or against drone strikes will depend largely on how likely we think they are to disrupt terrorist groups and how likely they are to produce civilian deaths.
At issue is how to estimate each of those likelihoods. One option is to rely on gut instinct — which is to say, to rely on the patterns we subconsciously pick up on as we read about the effect of drone strikes in the news, discuss them with colleagues, etc. Another option is to rely on careful counterfactual reasoning, such as rigorously selecting cases and analyzing them in-depth.
However, if we want to estimate the likely effect of drone strikes with any precision, then data analysis offers a better approach. For instance, in a paper published earlier this year, Patrick Johnston and Anoop Sarbahi looked at data on drone strikes and insurgent activity in Pakistan and showed that drone strikes may reduce terrorist violence by nearly 25% in the week following an attack. If we couple that estimate with corresponding data on civilian casualty rates, we can begin to make an informed judgment about whether the strategic value of drone strikes outweighs the moral cost of potential civilian casualties.
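That kind of weighing can be made explicit. In the sketch below, the 25% figure echoes the Johnston-Sarbahi estimate, but every other number, and the function itself, is invented purely for illustration:

```python
# Hypothetical sketch of combining an empirical effectiveness estimate
# with a cost estimate on a single (subjective) scale.

def net_assessment(baseline_attacks, reduction_rate,
                   expected_civilian_deaths,
                   value_per_attack_averted, cost_per_civilian_death):
    """Weigh strategic gain against moral cost; positive favors the strike."""
    gain = baseline_attacks * reduction_rate * value_per_attack_averted
    cost = expected_civilian_deaths * cost_per_civilian_death
    return gain - cost

# Illustrative inputs: 8 expected attacks in the following week, a 25%
# reduction, and invented subjective weights for each kind of outcome.
print(net_assessment(8, 0.25, 1.0, 5.0, 12.0))  # -2.0 with these numbers
```

Note the division of labor: the baseline, the reduction rate, and the casualty estimate can all be informed by data, but the two value weights remain subjective judgments. That division is exactly the limit on data-driven policy-making the rest of this piece describes.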
Of course, even rigorous data analysis is far from foolproof. The process of building datasets embeds biases and ethical judgments of its own, and analyzing data typically demands a host of strong assumptions. Further, when researchers disagree about which data and assumptions to use, they can arrive at contradictory conclusions.
Yet the question isn’t whether data analysis is perfect, but whether it’s better at constructing likelihoods than the alternatives. Are we better off estimating the likely effect of a policy based solely on our subconscious perceptions and the unknown biases that inform them? Or are we better off estimating those likelihoods empirically, after taking known biases into account? Data analysis will often be the better option. If we want to understand how likely a range of policy outcomes may be, we will almost always be on surer ground when we incorporate empirical evidence and analysis.
Again though, data analysis can only inform our intuitions about likely outcomes. It cannot inform the value we attach to those outcomes. Even if we had a model that validated perfectly, what would we do if it said there was an 80% chance of regime-led mass atrocity in a country, but only a 20% chance of a stable democracy taking root if we intervened? Or what would we do if the numbers were reversed? Such questions afford no easy answers, much less objectively right ones. Instead, they demand subjective decisions, however fraught, about which strategic or moral interests we ought to value most.
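The point can be made concrete: hold the model's probabilities fixed and the decision still turns entirely on the value weights. A sketch, with hypothetical weights (none of these numbers come from any real analysis):

```python
# A perfectly validated model fixes the probabilities but not the decision.
# Two decision-makers with different values reach opposite conclusions.

P_ATROCITY_IF_NO_INTERVENE = 0.8   # model output: atrocity risk absent intervention
P_DEMOCRACY_IF_INTERVENE = 0.2     # model output: stable democracy after intervention

def favors_intervention(cost_of_atrocity, value_of_democracy, cost_of_war):
    """Compare expected values of intervening vs. not, given subjective weights."""
    ev_no_intervene = -P_ATROCITY_IF_NO_INTERVENE * cost_of_atrocity
    ev_intervene = P_DEMOCRACY_IF_INTERVENE * value_of_democracy - cost_of_war
    return ev_intervene > ev_no_intervene

# Same probabilities, different value judgments, opposite answers:
print(favors_intervention(cost_of_atrocity=10, value_of_democracy=5, cost_of_war=3))  # True
print(favors_intervention(cost_of_atrocity=2,  value_of_democracy=5, cost_of_war=3))  # False
```

No amount of additional data resolves the disagreement between the two calls above; only a judgment about how much to weigh an atrocity against the costs of war can.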
The great promise of the data revolution is that it will enable us to estimate potential policy outcomes much more accurately. Yet that is only one dimension of policy-making. Even in the age of big data, age-old questions about strategic and moral value will remain as pressing as ever.