The Promises and Perils of Emerging Technologies for Cybersecurity

Herb Lin
Monday, May 1, 2017, 8:50 AM

In late March 2017, I was invited to submit for the record my views on “the Promises and Perils of Emerging Technologies for Cybersecurity” before the Senate Committee on Commerce, Science, and Transportation. What follows below is what I submitted for the hearing record held on March 22, slightly modified to include some references. I invite comment from Lawfare readers.

The hearing was intended to explore the impact of emerging technologies, including artificial intelligence, the internet of things, blockchain, and quantum computing, on the future of cybersecurity and to launch a discussion about how such technologies create new cyber vulnerabilities but also innovative opportunities to combat cyber threats more effectively.

On the cybersecurity impacts of the technologies listed explicitly in the hearing announcement:

  • Artificial intelligence. AI may have substantial value in recognizing patterns of system behavior and activity that could indicate imminent or ongoing hostile cyber activity. Many hostile activities are discovered long after the initial penetrations have occurred, and earlier detection could reduce the damage they do. It may also be possible to apply AI techniques across multiple systems to detect hostile cyber activities on a large scale, recognizing, for example, a coordinated cyberattack on the nation as a whole; this is a substantially harder problem than detecting a cyberattack on a single system. (A minimal illustrative sketch appears at the end of this discussion of AI.)

A new kind of AI is known as “explainable AI.” Today, most AI-based systems are unable to explain to their human users why they reach the conclusions they reach or demonstrate the behavior they demonstrate. At least at first, users must simply trust that the system is behaving properly; over time, their trust grows if the system repeatedly behaves properly. But an AI-based system that can explain its reasoning is more easily trusted by its human users. Thus, an AI-based system could explain to its users why it is behaving in a manner that is inconsistent with its expected behavior, and such an explanation might well point to an adversary’s hostile activities as the cause. Today, DARPA has research programs underway to develop explainable AI.

AI may also be of substantial value in improving the productivity of cybersecurity workers and thereby mitigating the shortages of such workers anticipated for at least the next decade. Although AI-based systems are unlikely to replace cybersecurity workers entirely, they will surely be able to handle much of the relatively routine work that most cybersecurity workers have to do today—freeing human workers to do what the AI-based systems cannot do. In his testimony to the Senate Commerce Committee, Caleb Barlow referred to AI helpers for cybersecurity workers as cognitive security assistance.
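To make the detection idea above a bit more concrete, the short sketch below runs an off-the-shelf anomaly detector (scikit-learn's IsolationForest) over made-up activity features. The features, numbers, and thresholds are illustrative assumptions, not a description of any deployed system, and real detection pipelines are far richer; the point is only to show the flavor of “learn what normal looks like, then flag departures from it.”

    # Minimal sketch: learn what "normal" activity looks like, then flag departures.
    # Features and data are made up for illustration only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Assumed features per event: [logins per hour, megabytes transferred, failed logins]
    normal_events = rng.normal(loc=[5, 20, 1], scale=[2, 5, 1], size=(500, 3))
    new_events = np.array([[40, 900, 25],   # burst of logins plus a large transfer
                           [3, 15, 60]])    # many failed logins

    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)

    for event in new_events:
        label = detector.predict(event.reshape(1, -1))[0]   # -1 = anomalous, 1 = normal
        print(event, "-> anomalous" if label == -1 else "-> normal")

An explainable variant of such a system would, in addition, report which features drove a given alert; that is the kind of capability the DARPA work mentioned above aims to provide.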

  • The internet of things (IOT). IOT generally refers to the inclusion of computational capabilities into physical devices and the connection of these devices to the Internet. When IOT is not a marketing ploy (which it often is), it embodies the idea that IOT devices will operate more efficiently and effectively if they can obtain and react to information gleaned from their physical environment.

On the other hand, the number of IOT devices is expected to reach 50 billion within a decade (compared to a few billion today). And many if not most of these devices are likely to be much less secure than today’s computers (which are themselves hardly exemplars of good security). The likely reduced security of IOT devices is the result of technical and market factors. On the technical side, and in the interest of cost reduction, such devices may well be equipped with only enough computational capability to do their job of increasing efficiency, and not enough to attend to security as well. From a market perspective, first movers tend to profit more than latecomers, and attention to security is counterproductive from the standpoint of reducing time-to-market.

So what are the security consequences of an additional 45 billion computational nodes on the Internet, many and perhaps most of which are easily compromised? Today, powerful botnet-driven denial-of-service attacks involve hundreds of thousands of machines, and such attacks can prevent even well-protected institutions from serving their users. But botnet attacks of the future may involve millions or even tens of millions of compromised machines. This does not bode well. (A back-of-the-envelope illustration of this scale appears at the end of this discussion of IOT.)

Furthermore, many IOT devices can effect changes in their environments. For example, they may raise the temperature in a device, activate a motor, or turn on an electrical current. If such changes are made at a time chosen by a malicious party, a piece of bread in an IOT toaster could catch fire, an IOT car could go out of control, or an IOT-connected electrical motor could burn out.
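The back-of-the-envelope calculation below, referenced in the botnet discussion above, illustrates why the projected growth matters. The per-device upload rate is an assumption chosen only for illustration; real attack traffic depends heavily on device types and network conditions.

    # Back-of-the-envelope: aggregate traffic from hypothetical IOT botnets of
    # various sizes. The 1 Mbps sustained upload per device is an assumption.
    PER_DEVICE_MBPS = 1.0

    for devices in (100_000, 1_000_000, 10_000_000):
        total_gbps = devices * PER_DEVICE_MBPS / 1_000
        print(f"{devices:>12,} devices x {PER_DEVICE_MBPS:.0f} Mbps ~= {total_gbps:,.0f} Gbps")

Even under these modest assumptions, a ten-million-device botnet generates aggregate traffic measured in terabits per second, far beyond what most organizations provision their defenses to absorb.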

  • Blockchain. Blockchain is essentially a decentralized database that keeps digital records of transactions that are accessible to any authorized user of the database. A record added to the blockchain is cryptographically tied to the records before it, and thus a dishonest authorized user who tries to change a record must also change every subsequent record in the blockchain. The difficulty of making such changes increases as more records are added. And because the records are distributed among a large number of systems and viewable by any authorized user, hacks that involve compromising intermediaries that centrally manage database records can be eliminated. (A minimal sketch of this chaining appears at the end of this discussion of blockchain.)

But blockchain technology does not eliminate the possibility of database fraud against users. A simple example is that newer blockchains (i.e., those with fewer records) are more vulnerable to hacking than older ones (i.e., those with many records). Thus, one kind of fraud could be to trick or persuade naïve users into using new blockchains, taking advantage of blockchain’s reputation as a highly secure technology.
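The short sketch below, using only Python's standard hashlib module, illustrates the chaining property described above: because each record stores the hash of the one before it, altering any record invalidates every record that follows. It omits consensus, digital signatures, and everything else a real blockchain needs.

    # Minimal hash chain: each record stores the hash of the previous record, so
    # altering any record breaks every link that follows it. This omits consensus,
    # signatures, and proof-of-work; it only illustrates the chaining property.
    import hashlib

    def record_hash(prev_hash: str, payload: str) -> str:
        return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

    chain = []
    prev = "0" * 64  # placeholder hash for the first ("genesis") record
    for payload in ["Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dan 1"]:
        h = record_hash(prev, payload)
        chain.append({"prev": prev, "payload": payload, "hash": h})
        prev = h

    def verify(chain):
        prev = "0" * 64
        for i, rec in enumerate(chain):
            if rec["prev"] != prev or rec["hash"] != record_hash(prev, rec["payload"]):
                return f"chain breaks at record {i}"
            prev = rec["hash"]
        return "chain verifies"

    print(verify(chain))                        # chain verifies
    chain[0]["payload"] = "Alice pays Bob 500"  # tamper with an early record
    print(verify(chain))                        # chain breaks at record 0

Note that an attacker who also recomputes the tampered record's hash still breaks the next record's stored link, which is why the change propagates through every subsequent record and why the work required grows as the chain does.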

  • Quantum computing. The primary security issue associated with quantum computing is that the algorithm most commonly used to secure transactions over the internet (i.e., between two parties that have not previously communicated with each other) would be rendered ineffective for most practical purposes once quantum computing becomes widely available. Algorithms that can resist quantum computing are known but are more costly to implement. Furthermore, it takes time to replace the current quantum-vulnerable infrastructure with one that is quantum-resistant, a point suggesting the danger of waiting until quantum computing is known to be feasible before taking action.
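To make the exposure concrete: RSA, a public-key algorithm widely used to set up secure connections between parties that have never communicated before, rests on the difficulty of factoring large numbers, and efficient factoring is precisely what a sufficiently large quantum computer running Shor's algorithm would provide. The toy sketch below uses absurdly small numbers so that ordinary trial division can stand in for the quantum attack; real keys are thousands of bits long and cannot be factored this way.

    # Toy RSA with tiny numbers, showing that recovering the private key reduces
    # to factoring the public modulus. A quantum computer running Shor's algorithm
    # would make that factoring step feasible at real key sizes.
    n, e = 3233, 17          # public key: n = 61 * 53, public exponent e
    d = 2753                 # private exponent, normally known only to the key owner

    message = 65
    ciphertext = pow(message, e, n)
    print("ciphertext:", ciphertext)             # 2790
    print("decrypted :", pow(ciphertext, d, n))  # 65

    # "Attack": factor n (trivial here, infeasible classically for 2048-bit moduli),
    # then derive the private exponent from the factors.
    p = next(i for i in range(2, n) if n % i == 0)
    q = n // p
    phi = (p - 1) * (q - 1)
    recovered_d = pow(e, -1, phi)                # modular inverse (Python 3.8+)
    print("recovered :", pow(ciphertext, recovered_d, n))   # 65 again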

Going beyond the technologies explicitly mentioned in the hearing introduction, a number of other technologies may have significant impact. Some notable examples are described below, but they are by no means the only emerging technologies that belong in this category.

  • Formal verification of programs. Formal verification of programs is a process through which a mathematical proof can be generated that a program does what its specifications say it should do, and does not do anything that is not contained in the specifications. Although program specifications can be wrong, ensuring that programs conform to specifications would be a major step forward in eliminating many cybersecurity vulnerabilities. DARPA has supported some remarkable work in this area under the auspices of its program for High-Assurance Cyber Military Systems, though of course there is no reason that the methodologies developed in this program are necessarily applicable only to military systems. Today, it is possible to formally verify programs of some tens of thousands of lines of code—remarkable in light of the fact that several years ago, formal verification was only possible for programs less than one-tenth that size. On the other hand, programs today run into the millions and tens of millions of lines of code, a point suggesting that formal verification alone will not be a solution for many real-world problems. (A toy illustration of the underlying idea appears after this list.)
  • New computer architectures. Most of today’s computing infrastructure is based on a computer architecture proposed by von Neumann in 1945. Although this architecture has demonstrated incredible practical utility, it does come with a number of inherent security flaws. One of the most significant security issues is that the memory of a von Neumann machine contains both the instructions that direct the computations of the machine and the data on which these instructions operate. As a result, data can be executed as though it were part of a program. And since data is introduced into the computer by a user, the user—who may be hostile—may have some ability to alter the program running on the computer. Some new computer architectures effectively separate data and instructions to eliminate this kind of problem.
  • Disposable computing. Disposable computing is based on the idea that if an adversary compromises a computing environment that the user can throw away without ill effect, the compromise has no practical impact on the user. (An introduction to this idea can be found here.) Today’s processors are powerful enough to run a disposable environment and a “safe” environment simultaneously. The major problem with such an approach is that passing data from the disposable computing environment to the “safe” environment provides a potential path through which compromises of the safe environment can occur. Relatively safe and controlled methods of data exchange can be used to pass data, thus reducing the likelihood of compromise but also increasing the inconvenience of data passage. Some commercial products are in this space, but they have not been deployed widely.
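Production-grade formal verification relies on specialized proof toolchains, but the basic flavor of “prove that the code satisfies its specification for all inputs, not just the tested ones” can be shown with an off-the-shelf solver. The toy sketch below assumes the z3-solver Python package is installed; it is an illustration of the idea only, not the methodology of the DARPA program mentioned above.

    # Toy flavor of formal verification: ask an SMT solver to prove that a small
    # code fragment satisfies its specification for ALL inputs, not just test cases.
    # Requires the z3-solver package (pip install z3-solver).
    from z3 import Int, If, And, Or, prove

    x = Int("x")
    result = If(x >= 0, x, -x)   # symbolic model of: "if x >= 0: return x; else: return -x"

    # Specification: the result is never negative and has the same magnitude as x.
    spec = And(result >= 0, Or(result == x, result == -x))

    prove(spec)   # prints "proved": the property holds for every integer x

Scaling this style of reasoning from a three-line fragment to tens of thousands of lines is exactly the achievement, and the limitation, described above.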

Lastly, a number of urgent needs for improved cybersecurity are less obvious and poorly understood. Again, this discussion is not meant to be exhaustive; the absence of an area from this list is not an indication that it is unimportant.

  • Less expensive ways of writing secure software. Today, the cost of writing highly secure software is one or two orders of magnitude higher than that of writing ordinary code. It is natural that writing highly secure software would entail some additional expense, but when the cost differential is this large, the disincentives for employing known techniques for secure software development are virtually impossible to overcome.
  • Usable security. Today, security measures often call for the end user to make decisions about security. Security measures always get in the way of users (no one enters passwords into a computer system for the sheer joy of doing it). Thus, users usually make decisions that are convenient for them (e.g., they choose easy-to-remember passwords) but that also compromise security (easy-to-remember passwords are more easily guessed by an adversary; a rough worked example follows this list). Security architectures that reduce the number of such decisions are more likely to be successful than those that do not. The downside of such architectures is that they may be less flexible under many circumstances, and finding the appropriate balance between allowing and not allowing users to make personal security decisions is hard.
  • Business models for monetizing information exchange. Despite the best efforts of government and private entities, the problem of exchanging information related to cybersecurity remains unsolved. In essence, the issue is that everyone wants to receive information but no one wants to disclose it—and the upside of receiving information is outweighed by the risks associated with disclosure. Developing business models for monetizing information exchange—paying parties to disclose information—may well increase the benefits of disclosure and promote additional and much needed information exchange.
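The rough calculation below, referenced in the usable-security item above, puts numbers on the gap between convenient and strong passwords. The assumed guessing rate is an illustrative figure for an offline attack, and real-world guessability also depends on whether a password appears in dictionaries of common choices, so this is only a crude proxy.

    # Back-of-the-envelope: how length and character variety change an attacker's
    # search space. The guessing rate is an illustrative assumption.
    import math

    GUESSES_PER_SECOND = 1e10   # assumed offline cracking rate, for illustration only

    for label, alphabet, length in [("8 lowercase letters", 26, 8),
                                    ("12 mixed letters/digits/symbols", 94, 12)]:
        combinations = alphabet ** length
        bits = length * math.log2(alphabet)
        years = combinations / GUESSES_PER_SECOND / (3600 * 24 * 365)
        print(f"{label}: about {bits:.0f} bits; worst case about {years:.2g} years to exhaust")

The first case is exhausted in seconds under these assumptions, the second in over a million years, which is why architectures that take such choices out of users' hands (for example, machine-generated credentials stored in a password manager) tend to fare better.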

The three needs described above are not, strictly speaking, emerging technologies in the usual sense of a new electronic gadget or a new algorithm. But advances in these areas (and many others) could well be described as security-relevant innovations. The reason is that better cybersecurity is not only a technological problem, and making progress on it calls for an array of innovative ideas grounded in disciplines such as economics, psychology, organizational theory, and law and policy as well as technology. These points underscore the importance of defining the emerging technologies relevant to better cybersecurity expansively rather than narrowly.


Dr. Herb Lin is senior research scholar for cyber policy and security at the Center for International Security and Cooperation and Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution, both at Stanford University. His research interests relate broadly to policy-related dimensions of cybersecurity and cyberspace, and he is particularly interested in and knowledgeable about the use of offensive operations in cyberspace, especially as instruments of national policy. In addition to his positions at Stanford University, he is Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies, where he served from 1990 through 2014 as study director of major projects on public policy and information technology, and Adjunct Senior Research Scholar and Senior Fellow in Cybersecurity (not in residence) at the Saltzman Institute for War and Peace Studies in the School for International and Public Affairs at Columbia University. Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.
