Setting the Standard for CVE

Evanna Hu
Sunday, February 17, 2019, 10:00 AM

Editor’s Note: Few people disagree with the goal of Countering Violent Extremism (CVE), but in practice the programs have faced many problems. A big one is that it is hard to know if they are working, as existing metrics do a poor job of measuring success and failure. Evanna Hu of Omelas proposes a set of fixes to CVE programs that would make them more rigorous and more effective.

Daniel Byman

***

There is no doubt that Countering Violent Extremism (CVE) is critical to U.S. national security interests. But CVE has always suffered from a major flaw: a lack of monitoring and evaluation (M&E) to show whether the many programs being implemented are actually effective. Without rigorous M&E, it has been hard to defend the CVE budget in front of Congress, and four years after the first CVE summit hosted by the White House under President Barack Obama, we still do not know with confidence which CVE programs work better than others in different contexts. Have the hundreds of millions of dollars spent on CVE programs actually had the desired effects? Have some programs generated unintended backlash? And moving forward, how do we know which CVE programs are worth continuing and deliver the biggest bang for the buck? Setting industry standards for M&E can answer these questions, allowing policymakers to develop more effective programs and direct funding to what works.

Measuring Effectiveness, Building Responsiveness

A number of near-term fixes can increase the analytic rigor of CVE program evaluation and improve implementation. First, these programs need robust M&E methodologies to gauge their effectiveness. Although Mercy Corps has done rigorous quantitative analysis of its programs in Somalia and Afghanistan, waiting one to two years after a program ends for results is simply not realistic when funding must be justified within short political cycles. The simplest way to integrate M&E strategies is to make sure they are in place before programs begin. While this sounds like common sense, in most cases M&E is an afterthought, designed after the start or even the completion of a program. Donor offices and agencies should place more emphasis on M&E in bid proposals, forcing grantees to take it more seriously as well.

Second, the metrics used in M&E must measure the actual variables of interest. Again, this sounds like common sense, but current M&E sections in requests for proposals and subsequent reports do not specify metrics for attitude or behavioral change, the two goals of any CVE program. Currently, in-person programs document how many people attended a training, what they reported in post-program surveys and other indicators of the program’s reach. More sophisticated evaluations administer a baseline knowledge survey before implementation and a follow-up survey afterward, reporting the change in knowledge attainment. For online countermessaging campaigns, click-through and engagement rates are still the norm. This is a problem on many levels. In the Balkans, for example, those in the international development space roll their eyes and call these metrics “rent-a-crowd” numbers, because the same people show up for workshop after workshop, many of them there for the food and stipend. They are physically present, but they aren’t actually learning. Similarly, click-through and engagement rates are weak indicators. How many times have you clicked on an article online and not read it, only to share it later? Getting rid of these “rent-a-crowd” numbers and other questionable metrics is a relatively effortless improvement that can be made immediately, without the impediment of institutional inertia.

It would be easy to improve the precision of baseline information by surveying a control group. When organizers send out informational packets to individuals who plan to attend a CVE workshop or event, they could also include a survey with knowledge-attainment and attitude-related questions. In event planning, it is standard to expect that 20 to 50 percent of those confirmed for an event will be no-shows. These no-shows automatically become a quasi-scientific control group. The treatment group then comprises those who showed up and attended the training. Having the two groups allows evaluators to control for external factors, such as a major religious holiday or a terrorist attack, that could otherwise skew the final results.
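As a rough illustration of how such a comparison might be analyzed, the sketch below computes the difference in survey-score gains between attendees and no-shows. The file name, the column names and the use of a Welch's t-test are assumptions made for the example, not a prescribed evaluation design.

```python
# A minimal sketch of the quasi-experimental comparison described above.
# The CSV file and column names (group, pre_score, post_score) are hypothetical;
# a real evaluation would adapt them to its own survey instrument.
import pandas as pd
from scipy import stats

surveys = pd.read_csv("cve_workshop_surveys.csv")  # one row per respondent

attendees = surveys[surveys["group"] == "attended"]   # treatment group
no_shows = surveys[surveys["group"] == "no_show"]     # quasi-control group

# Change in knowledge-attainment score for each respondent
attendee_gain = attendees["post_score"] - attendees["pre_score"]
no_show_gain = no_shows["post_score"] - no_shows["pre_score"]

# Difference-in-differences: how much more did attendees improve than no-shows?
did = attendee_gain.mean() - no_show_gain.mean()

# Welch's t-test on the gains as a simple significance check
t_stat, p_value = stats.ttest_ind(attendee_gain, no_show_gain, equal_var=False)

print(f"Attendee gain:             {attendee_gain.mean():.2f}")
print(f"No-show gain:              {no_show_gain.mean():.2f}")
print(f"Difference-in-differences: {did:.2f} (p = {p_value:.3f})")
```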

Another helpful development would be to ensure that the post-event survey is conducted at least a few weeks after program completion. The time lag would allow evaluators to confirm that the knowledge has truly stuck, rather than being crammed in and forgotten a few days later.

For online programs, we can take a page from the private sector and use sentiment analysis to check whether the target audience is engaging more or less with terrorist propaganda (defined by sharing and “liking” on social media), rather than relying on more imprecise metrics such as click-through rates. Sentiment analysis scores content against five basic human emotions (fear, anger, joy, sadness, and disgust) on a scale. If a CVE program is effective, we would expect the target audience to score lower on negative sentiments after the program, and lower than a control group that did not receive the training. This approach has limitations, especially for languages (such as Somali and Pashto) that lack natural language processing libraries, and measuring decreased engagement with terrorist propaganda ultimately runs into the same issue as relying solely on click-through rates: it cannot rule out that the user is just clicking without digesting the propaganda. Still, it is an improvement on the status quo.
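To make the idea concrete, here is a minimal sketch of that kind of comparison. It substitutes NLTK's general-purpose VADER sentiment scorer for the five-emotion scale described above (VADER is English-only, which underscores the language limitation noted in the text), and the comment lists are hypothetical placeholders rather than real data.

```python
# A rough sketch of comparing negative sentiment between a campaign's target
# audience and a control audience. VADER (from NLTK) stands in for the
# five-emotion model mentioned in the text; the comment lists are placeholders.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

def mean_negativity(comments):
    """Average VADER 'neg' score across a list of social media comments."""
    scores = [analyzer.polarity_scores(text)["neg"] for text in comments]
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical post-campaign samples of comments on extremist propaganda
treated_comments = ["example comment from the audience that saw the campaign"]
control_comments = ["example comment from an audience that did not see it"]

print("Treated audience negativity:", mean_negativity(treated_comments))
print("Control audience negativity:", mean_negativity(control_comments))
# If the campaign worked, the treated audience should score lower.
```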

The third way to improve the effectiveness of CVE programs is to better align interests on the ground with those of policymakers. When I was in Iraq in November, many civil society organizations and local nongovernmental organizations (NGOs) complained that there was an abundance of funding for things they didn’t really need or that were low on their communities’ priority lists, while the top-line items were ignored. For example, on the outskirts of Mosul, what community leaders need most is housing and business reconstruction. Instead, the U.S. Agency for International Development provided reflective strips to be painted on trash cans.

This is the classic misalignment between headquarters and field offices. Primary contractors and implementing partners are useful for organizational purposes and for dodging political sensitivities, but they should not replace the fieldwork that donor representatives in-country are required to do.

We should build a system in which local communities, once accredited, can submit anonymous complaints or suggestions, similar to the State Department’s internal dissent channel, without fearing retribution or the loss of their funding. This could be a text-messaging system or an online platform. Moreover, donor agencies based in cities such as Washington and London should be required to make at least one or two visits each year to the countries and regions in which they fund projects, to ensure they remain knowledgeable about the issues they are trying to address, and donor representatives working in-country should host community meetings every month. Participants in those meetings should have the authority to implement changes or to adapt quickly to shifts in local priorities. The Department of Homeland Security (DHS) Office for Civil Rights and Civil Liberties holds quarterly community roundtables in cities such as Minneapolis and Columbus, Ohio, at which community members gather with government officials. The problem is that, without enforcement power, the DHS program’s progress is incremental and slow.

Setting Standards

The suggestions above are practical measures that could be implemented relatively easily and with a big return. The longer-term objective is to establish a standard set of metrics for CVE programs, after first deciding what success should look like. Currently, there are no standard metrics. Even within a single U.S. government agency, such as DHS or the State Department, each office has its own metrics, which may vary even within the office from project to project. This has caused confusion among practitioners and policymakers alike, especially when defending the CVE budget in front of Congress.

The standard metrics need to be measurable, meaningful and clear, similar to the Objectives and Key Results (OKRs) methodology used in high-growth technology companies. Setting concrete metrics for different categories of CVE programs (the metrics for online countermessaging campaigns, for instance, differ from those for in-person workshops) would also force the various offices and departments to collaborate. Interdepartmental collaboration has been embarrassingly sparse. For example, in high-priority countries such as Kenya, Tunisia and Nigeria, coordination was so poor that the State Department’s Bureau of Counterterrorism and Countering Violent Extremism had to fund a multimillion-dollar program in 2016 just to map all the CVE programs sponsored by the various donor agencies and offices within the U.S. government.

Fixing the monitoring and evaluation of programs would be a big step forward in improving CVE. But it is only one component of the larger overhaul CVE requires to remain relevant. Too often, CVE is treated as counterterrorism’s pesky pet project, one from which returns are optional. We can elevate CVE to be seen as serious, rigorous and critical only if we agree on its end goal and have metrics along the way to demonstrate success.


Evanna Hu is the CEO of Omelas, an international security fellow at New America and a lecturer on CVE at NATO.
