How to Measure Cybersecurity

Robert S. Taylor
Monday, August 26, 2019, 11:41 AM

Paul Rosenzweig observed recently on Lawfare that there are “no universally recognized, generally accepted metrics by which to measure and describe cybersecurity improvements” and that, as a result, decision-makers “are left to make choices about cybersecurity implementation based on qualitative measures rather than quantitative ones.” Rosenzweig is working with the R Street Institute to build a consensus on useful metrics.

By raising the question of what tools those with the responsibility to make an organization’s cybersecurity investment decisions should use, Rosenzweig has already made a significant contribution. But his search for quantitative metrics and his dismissal of qualitative ones ignore the dynamic nature of the challenge of ensuring cybersecurity, as well as the critical role of processes and procedures. Cybersecurity is a matter not just of the equipment and tools in place but also of how people use them and of how the organization keeps both the tools and the methods of using them up to date. Qualitative measures that are discernible and reproducible are and will continue to be essential in helping to guide sound investment and operational decisions.

There appears to be a huge societal underinvestment in cybersecurity. If the Council of Economic Advisers (CEA) report on “The Cost of Malicious Cyber Activity to the U.S. Economy” (February 2018) is to be believed, malicious cyber activity imposed between $57 billion and a staggering $109 billion in costs on the U.S. economy in 2016 alone. According to Gartner, firms worldwide spent $81.6 billion on information security in that same year. Comparing the costs of malicious cyber activity to the U.S. economy with the amount spent worldwide on cybersecurity does not tell us very much. It is unknown, for example, how much was spent on cybersecurity in the United States alone; what the costs to the U.S. economy would have been if that cybersecurity spending had not occurred; what the additional cost would have been of measures that could have eliminated the $57 billion to $109 billion in losses (if eliminating all losses is even possible); and whether the cost of reducing malicious cyber activity is asymptotic. That is, do the costs of eliminating risk approach infinity as the remaining costs of malicious cyber activity approach zero? If so, where is the crossover point between cost-effective and money-wasting expenditures on cybersecurity?
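To make the asymptote question concrete, here is a minimal sketch, in Python and with entirely invented numbers, of the underlying arithmetic: if each additional dollar of security spending reduces expected losses by progressively less, there is a crossover point beyond which further spending costs more than it saves. The decay curve, the effectiveness parameter and the dollar figures below are assumptions for illustration only, not estimates drawn from the CEA or Gartner data.

```python
# Illustrative sketch only: every number here is a hypothetical assumption,
# not an estimate of actual U.S. costs or spending. It finds the crossover point
# at which an extra increment of cybersecurity spending stops reducing expected
# losses by more than it costs, under an assumed diminishing-returns curve.
import math


def expected_loss(spend_billion: float,
                  baseline_loss_billion: float = 109.0,
                  effectiveness: float = 0.02) -> float:
    """Assumed expected annual loss (in $B) as a function of security spend (in $B).

    An exponential decay is used purely to capture the idea that residual losses
    approach zero only as spending grows without bound; the true curve is unknown.
    """
    return baseline_loss_billion * math.exp(-effectiveness * spend_billion)


def marginal_benefit(spend_billion: float, step: float = 0.1) -> float:
    """Reduction in expected loss from one additional increment of spending."""
    return expected_loss(spend_billion) - expected_loss(spend_billion + step)


if __name__ == "__main__":
    step = 0.1  # spending increments of $0.1 billion
    spend = 0.0
    # Keep "spending" while each additional increment saves more than it costs.
    while marginal_benefit(spend, step) > step:
        spend += step
    print(f"Under these assumed parameters, spending beyond ~${spend:.1f}B "
          f"costs more than it saves (residual expected loss ~${expected_loss(spend):.1f}B).")
```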

Still, the available information shows that many organizations are simply spending too little. They are not deploying even low-cost measures that could substantially reduce the incidence of malicious cyber activity, and they are failing to keep their defenses up to date. Malicious cyber actors have learned that small and medium-sized entities constitute the soft underbelly of the United States’ cyber infrastructure, and they know that through this soft underbelly it is possible to impose substantial costs throughout the economy.

Because of that interdependence, there can be a significant mismatch between where defenses are excessively weak and where the costs of malicious cyber activity fall. Where an investment would primarily benefit someone other than the entity bearing its costs, those would-be benefits are likely to be ignored in the decision of whether to make the investment. Having better tools to assess the costs and benefits of cyber defenses will not necessarily address this mismatch. Indeed, as the CEA report observes,

Cybersecurity is a common good; lax cybersecurity imposes negative externalities on other economic entities and on private citizens. Failure to account for these negative externalities results in underinvestment in cybersecurity by the private sector relative to the socially optimal level of investment.

Further, cyber theft can have significant adverse effects on national security. The theft of data relating to the F-35 fighter aircraft from privately controlled networks is an example. In the judgment of then-Defense Department Undersecretary Frank Kendall, as documented in the CEA report, that breach could “give away a substantial advantage” and “reduce the costs and lead time of our adversaries to doing their own designs” based on the F-35 data. It is not at all clear whether such national security harms are counted in the financial harm done to the U.S. economy and, if so, how.

Additionally, a measure of the costs imposed on any individual company or on the economy during any previous time period does not fully capture the potential costs against which protection is needed. Things can get worse, and in the absence of better protection they undoubtedly will. Nation-states such as China and especially Russia are likely capable of imposing far more damage than they have to date but have so far chosen not to. For example, it has been widely reported that Russia has inserted malicious software throughout the U.S. electric grid, but Moscow has not yet pulled the trigger to inflict maximum damage.

It is essential to defend against capabilities and not just against the kinds of malicious actions that have taken place in the past. Prudence dictates that efforts be made to defend against future capabilities and to attempt to prevent even more potentially damaging capabilities from maturing.

Rosenzweig writes that the R Street Institute has received a range of responses so far on what sort of cybersecurity assessment tools are needed. He characterizes the responses as falling into one of three buckets: (1) the “we’ve got this handled” bucket—that is, a group of backbone service providers and other major actors on the network that believe they have internal cybersecurity metrics adequate to the task; (2) the bucket that believes the quest for a quantitative metric is “a fool’s errand”—that is, mostly academics, but also some practicing cybersecurity professionals, who believe that the challenge of cybersecurity is too dynamic for any quantitative measure to capture; and (3) the great middle bucket—that is, those who think the quest for a useful metric is important and that such a metric would be enormously helpful, but who have no clear idea what it would look like.

To the extent that a quantitative metric looked at such indicators as the number of attempted intrusions, and the success of those attempts in exfiltrating or altering data, I believe it would be of limited utility. The results would be driven by purely backward-looking information, such as the number of attempted intrusions, and would say nothing about the future or about the strength of existing defenses against the capabilities of potential future actors. Two equally well-defended or equally poorly defended entities could receive vastly different scores under such a metric, which would therefore provide no useful guidance on whether further investment is justified.
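To illustrate the point, the following toy example, with invented firm names, attack volumes and success rates, shows how a metric built on counts of attempted and successful intrusions can assign very different scores to two hypothetical firms with identical defenses, simply because one attracts more attacks.

```python
# Toy illustration of why an intrusion-count metric can mislead: the inputs below
# are invented, and the "metric" is deliberately naive.
from dataclasses import dataclass


@dataclass
class Firm:
    name: str
    attempts: int        # attempted intrusions observed last year (backward-looking)
    success_rate: float  # fraction of attempts that exfiltrated or altered data

    def naive_score(self) -> float:
        """Backward-looking 'metric': successful intrusions per year (lower looks 'better')."""
        return self.attempts * self.success_rate


firms = [
    Firm("Firm A (low-profile target)", attempts=1_000, success_rate=0.01),
    Firm("Firm B (high-profile target)", attempts=100_000, success_rate=0.01),
]

for firm in firms:
    print(f"{firm.name}: {firm.naive_score():.0f} successful intrusions")

# Both firms stop 99 percent of attempts, yet their scores differ by a factor of 100;
# the metric reflects attacker interest, not defensive strength against future capabilities.
```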

Despite my skepticism about the utility of the sort of quantitative approach Rosenzweig hints at, I am not squarely in the second bucket of responders—at least not yet. In my view, it might be possible to develop a testing range that would allow a given company’s network to be modeled and then subjected to various attacks with different defensive schemes or equipment deployed. Such an approach could provide extremely useful quantitative information, but it is likely to be extremely costly and slow, and it would not fully reflect how the company’s processes and procedures affect the level of cybersecurity achieved.

Another plausible approach is to work with white-hat hackers to probe for vulnerabilities. The number of flaws found, and how readily they can be remedied, would give a quantitative indication of the remaining risk and would presumably yield discrete improvements along the way, since the vulnerabilities that can be fixed would be. While quantitative information would be generated, and the exercise itself would yield real improvements in overall cybersecurity, I am not sure this would provide information valuable for comparing risks or for identifying tools to stop future risks from emerging.

I urge the R Street Institute to be open to a qualitative metric that focuses more on determining the presence or absence of controls and procedures than on quantitative scores of the level of cybersecurity achieved. This qualitative approach could be built on the work of the National Institute of Standards and Technology (NIST) reflected in its Cybersecurity Framework and its Special Publication 800-171. The NIST Cybersecurity Framework consists of standards, guidelines and best practices for managing cybersecurity-related risk, and Special Publication 800-171 is a more detailed document setting out how private entities can better protect sensitive but unclassified national defense information.

In addition, the R Street Institute should certainly look carefully at the Cybersecurity Maturity Model Certification (CMMC) program under development by the Department of Defense and other agencies as an approach that might well be worth emulating, even when not required. That program would establish requirements for certain controls and processes to be in place, with increasingly stringent requirements for each of the security levels to which a company could be certified, ranging from the most basic to the most sophisticated. To be eligible to compete for contracts with the Defense Department, or to be in the supply chain for such contracts, a company would be required to achieve certification at the level specified in the request for proposals. Certification at a given level, performed by an independent entity and not by the subject company itself, would be an easily understandable measure of how good the company’s cybersecurity is.
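As a rough sketch of what a presence-or-absence-of-controls assessment tiered into certification levels might look like, consider the example below. The control names and level thresholds are invented for illustration; they are not the actual CMMC or NIST 800-171 requirements.

```python
# Hypothetical tiered controls checklist: a company is certified at the highest level
# for which it has every required control at that level and all lower levels.
# The controls and levels below are invented, not the real CMMC requirements.
REQUIRED_CONTROLS_BY_LEVEL = {
    1: {"unique user accounts", "antivirus deployed", "default passwords changed"},
    2: {"multi-factor authentication", "patching within 30 days", "offsite backups"},
    3: {"centralized log monitoring", "incident response plan tested annually"},
}


def certification_level(controls_in_place: set[str]) -> int:
    """Highest level whose required controls (and all lower levels') are all present."""
    achieved = 0
    for level in sorted(REQUIRED_CONTROLS_BY_LEVEL):
        if REQUIRED_CONTROLS_BY_LEVEL[level] <= controls_in_place:  # subset check
            achieved = level
        else:
            break
    return achieved


company_controls = {
    "unique user accounts", "antivirus deployed", "default passwords changed",
    "multi-factor authentication", "offsite backups",
}
# Prints level 1: the company misses "patching within 30 days" at level 2.
print(f"Certified at level {certification_level(company_controls)}")
```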

Translating that measure of quality into information relevant to a cost-benefit analysis of additional investments in cybersecurity would remain a challenge. But if information on certification levels were to become widely available, insurers would be able to tailor cyber insurance premiums to better reflect the quality of a company’s defenses, and other companies would have useful information for deciding with whom to do business. The demands of third parties (insurers and other companies), combined with the kind of certification program the Defense Department is now developing, could result in an imperfect but still helpful tool for determining whether additional cybersecurity investments would be cost-effective.
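As a purely hypothetical illustration of how a widely available certification level might feed into pricing, the sketch below discounts an expected-loss-based premium by level; the base expected loss, loading factor and discount schedule are all assumptions, not figures from any insurer.

```python
# Hypothetical premium schedule: all inputs are assumptions for illustration only.
BASE_EXPECTED_ANNUAL_LOSS = 2_000_000  # assumed expected annual cyber loss for an uncertified firm ($)
LOADING_FACTOR = 1.25                  # assumed insurer overhead and margin
DISCOUNT_BY_LEVEL = {0: 0.00, 1: 0.15, 2: 0.30, 3: 0.45}  # assumed loss reduction per certification level


def annual_premium(cert_level: int) -> float:
    """Premium = (expected loss, reduced by the assumed discount for the level) x loading."""
    discount = DISCOUNT_BY_LEVEL.get(cert_level, 0.0)
    return BASE_EXPECTED_ANNUAL_LOSS * (1 - discount) * LOADING_FACTOR


for level in sorted(DISCOUNT_BY_LEVEL):
    print(f"Certification level {level}: ${annual_premium(level):,.0f} per year")
```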


Robert S. Taylor is the general counsel of MCE Social Capital, which raises capital for microfinance and similar entities in approximately 40 developing countries around the world. Previously, he was in private practice, a visiting scholar at Harvard Law School, and for eight years the principal deputy general counsel of the Department of Defense, including almost two years as acting general counsel. He can be contacted at R_Taylor@comcast.net.
