Published by The Lawfare Institute
The New York Times reported on June 15 that “the United States is stepping up digital incursions into Russia’s electric power grid in a warning to President Vladimir V. Putin.” In particular, the Times reported that the United States has deployed code “inside Russia’s grid and other targets”—that is, “potentially crippling malware inside the Russian system, ... intended partly as a warning, and partly to be poised to conduct cyberstrikes if a major conflict broke out between Washington and Moscow.” The article also noted that this step would represent a major escalation in the ongoing cyber conflict between Moscow and the United States.
That claim is probably true, though one has to wonder whether June 15 marks the point at which the United States achieved these capabilities or merely the point at which it began talking publicly about them. The former, of course, would be the real escalation. The latter would not be, unless we assume that the Russians were entirely oblivious to U.S. attempts to penetrate their electric grid before the Times story.
The story sheds light on one canonical argument about deterrence of cyber conflict. According to deterrence theory, the threat to carry out a punitive response must be credible to an adversary. The canonical argument for the impossibility of establishing the credibility of a cyber threat has always rested on the assumption that there is only one way to execute a given cyber mission, and that demonstrating that capability to establish credibility would destroy its future operational value. Why? Because a demonstration would reveal critical secrets of the capability to the adversary, who could then use those secrets to remediate the vulnerabilities that enabled the offensive capability in the first place.
But the Times story noted that U.S. officials did not object to reporting on the malware implants that would give the United States the ability to manipulate or shut down portions of Russia's electric grid and presumably other critical infrastructure. Their lack of objection suggests that there must be multiple ways to carry out those missions—if there were only one way, they would have been quite foolish to reveal even the existence of that way.
So what I learn from this story is that the assumption of a single method for carrying out a cyber mission is unlikely to be universally valid. Further, the buggy nature of software development (which is unlikely to differ significantly between Russia and the United States) leads me to believe that Russian critical infrastructure, like U.S. critical infrastructure, contains many vulnerabilities that could be exploited. This point is consistent with at least one analyst's judgment that vulnerabilities are plentiful rather than rare.
This point in turn casts doubt on the premise that a demonstration of an offensive cyber capability will destroy its future value as an operational asset. Perhaps that particular capability might be negated, but other cyber capabilities to carry out the mission are likely to be available. The broader implication for deterrence of cyber conflict is that, at least under some circumstances, the technical credibility of a threat can be demonstrated.