
Volkswagen and the Real Insider Threat

Zachary K. Goldman, Ramesh Karri
Monday, November 2, 2015, 12:31 PM

Insider threats rightly occupy a significant portion of the public discussion (and private debate inside corporations and government agencies) about cybersecurity. The company employee who intentionally or inadvertently releases sensitive information can cause as much damage as the Russian organized crime group or the Chinese military unit that steals information for profit or to further national political objectives. Just ask former NSA Director Gen. Keith Alexander, who had to deal with the fallout from Edward Snowden’s unprecedented leaks of classified information.

Published by The Lawfare Institute
in Cooperation With
Brookings

But the recent news about Volkswagen’s manipulation of emissions test results reveals a different kind of insider threat: one that is far more pernicious because it damages not only the people immediately affected, but also the bonds of trust between society and the institutions we rely on for many aspects of our lives.

Over the last several weeks, reporting has revealed a coordinated insider effort at Volkswagen to insert a malicious piece of software—a defeat device—into the cars’ electronic control modules. The device was able to sense when emissions tests were being conducted by monitoring things like “speed, engine operation, air pressure and even the position of the steering wheel,” and triggered changes to the car’s operations to reduce emissions during the testing process so that those cars would pass the tests. When the cars were not being tested, the software disabled the emission controls and the cars spewed up to 40 times the EPA-mandated emissions limits. Through the defeat device, Volkswagen was able to sell more than half a million diesel-fueled cars in the U.S. in violation of U.S. environmental laws.
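The logic reporting describes can be sketched in a few lines. This is an illustrative reconstruction only, not Volkswagen’s actual code; the function names, sensor inputs, and the simple steering-wheel heuristic are all assumptions made for clarity.

```python
# Illustrative sketch of defeat-device-style logic: infer a laboratory
# dynamometer test from sensor readings, and enable full emission
# controls only then. All names and thresholds are hypothetical.

def looks_like_emissions_test(speed_kmh, steering_angle_deg):
    # On a dynamometer the drive wheels spin through a scripted speed
    # profile while the steering wheel stays centered.
    return speed_kmh > 0 and steering_angle_deg == 0

def emission_control_mode(speed_kmh, steering_angle_deg):
    if looks_like_emissions_test(speed_kmh, steering_angle_deg):
        return "full"     # controls on: the car passes the test
    return "reduced"      # controls off: real-world emissions far above limits
```

The point of the sketch is how little it takes: a handful of sensor comparisons is enough to make a car behave one way for the regulator and another way for everyone else.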

It is not just that VW lied. It is that VW deliberately manipulated the system designed to identify when car companies were telling the truth about their emissions.

Subversion of this kind undermines trust in two ways. First and most important, it undermines the trust we have in the companies we rely upon to provide the infrastructure of our everyday lives. The revelation that Volkswagen was able to manipulate emissions tests in such a profound way for several years demonstrates that at least some regulated entities are acting to undermine society’s interests for their financial benefit. A case such as this therefore erodes confidence in regulated industries more broadly. How would we know, for example, that financial institutions subject to stress tests by the Federal Reserve are not manipulating the data on which regulators make determinations about the health of the financial system? And how would we know that drug or medical device companies are accurately reporting the results of trials or tests, instead of keeping two sets of electronic books—one for the regulators and one for themselves?

But, second, a case like this also undermines the trust we have in the ability of the state to police private entities and ensure that they are not subverting the public good to advance their own interests. Regulation exists in part to correct misaligned incentives and to ensure that private companies, motivated (and rightly so) to seek profits, do not pursue them in a way that undermines other social goods. Thus, we mandate that publicly traded companies disclose certain kinds of information and entrust the Securities and Exchange Commission with enforcing those requirements. And we entrust the Food and Drug Administration to ensure that medical device companies only introduce products into the marketplace after they have been tested thoroughly, in accordance with widely accepted standards. We certify home electronics as energy-efficient through the “Energy Star” process; and we stamp the Leadership in Energy & Environmental Design (LEED) certification on modern buildings. Can these certifications be trusted as valid? How would we know if they are being undermined?

It is in this way, by undermining the processes that oversee global corporations, that Volkswagen’s activities were profoundly short-sighted. For if we cannot trust the integrity of corporations, or the integrity of the regulatory process designed to police them, we will all—including the malfeasant corporations themselves—suffer.

So where do we go from here? What can we do to ensure that the systems designed to oversee the most important sectors of our economy—healthcare, financial services, construction, the automotive industry, and others—cannot be subverted to nefarious ends?

One way is to focus regulatory measures on hardening the physical components of the goods that power our lives so that they cannot be tampered with. Alternatively, regulatory agencies can encourage the use of trustworthy physical sensors, such as pressure sensors, speed sensors, and odometers, that can reveal if they have been tampered with. At NYU, we research and design electronic sensors that are trustworthy by design—with these types of considerations built in from the beginning. From a systems perspective, the software that orchestrates the sensors and actuators in such cyber-physical systems, as well as the analysis and reporting software, can be made open source. This would facilitate scrutiny by the public and allow the security research community to probe for flaws and vulnerabilities—whether inadvertently or maliciously introduced. Because only the compliance-relevant code would be opened, this approach protects other proprietary software while allowing enhanced scrutiny of the systems that orchestrate regulatory compliance. Finally, from a verification perspective, there should be continuous, in-field monitoring instead of periodic monitoring at test locations, and “red team” assessments by external parties rather than reliance on self-assessment processes.
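The continuous in-field monitoring proposed above can be sketched simply: compare the values a vehicle self-reports against readings from an independent, regulator-trusted sensor, and flag sustained divergence for audit. The function, its tolerance, and the run-length threshold below are illustrative assumptions, not an existing regulatory standard.

```python
# Hedged sketch of continuous verification: flag sample indices where
# independently measured emissions exceed the vehicle's self-reported
# values by more than `tolerance` for at least `min_run` consecutive
# samples. Parameters are illustrative, not drawn from any regulation.

def flag_divergence(reported, measured, tolerance=0.10, min_run=3):
    flags, run = [], []
    for i, (r, m) in enumerate(zip(reported, measured)):
        if m > r * (1 + tolerance):
            run.append(i)          # divergent sample: extend the run
        else:
            if len(run) >= min_run:
                flags.extend(run)  # sustained divergence: keep for audit
            run = []
    if len(run) >= min_run:        # handle a run at the end of the stream
        flags.extend(run)
    return flags
```

Requiring divergence to persist across several samples filters out transient sensor noise, so only the sustained gaps characteristic of a defeat device reach an auditor.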

In short, we must create processes designed not only to supervise corporations, but also to reinforce the regulatory mechanisms that keep them in check. As Ronald Reagan once said, we should trust but verify—and much more creatively so.


Zachary K. Goldman, JD, is the Executive Director of the Center on Law and Security and an Adjunct Professor of Law at NYU School of Law.
Ramesh Karri is Professor of Electrical and Computer Engineering at the NYU Tandon School of Engineering and a co-founder of the NYU Center for Cyber Security.
