Published by The Lawfare Institute
News of the SolarWinds hack emerged with reports that the incident had triggered an emergency Saturday meeting at the National Security Council. In the weeks that followed, the story dominated headlines. Whereas most offensive cyber operations rarely receive concentrated focus, the name of a Texas-based information technology software company, SolarWinds, became ubiquitous across mainstream news outlets and quickly synonymous with the Russian hacking operation that targeted it. Policymakers, corporations and the entire cybersecurity industry were soon asking, “How do we address SolarWinds?”
Russian state actors had breached SolarWinds’ network to insert a backdoor into a software product used in critical networks across the United States. The hackers then snuck through their carefully hidden entrance to infiltrate the State Department, Treasury Department, Microsoft, and many other government and corporate networks from among the thousands of compromised organizations now open to them. The scale of the attacks, along with the high-profile nature of many of the targets, encouraged the widespread coverage and subsequent reaction from elected officials. Congressional hearings were scheduled, and then-President-elect Biden pledged to address the issue.
The Senate hearing focused in large part on preventing cyber espionage, reflecting the Capitol’s sanguine attitude toward the issue. While members of Congress questioned SolarWinds leadership about how to prevent such a backdoor in the future, other attackers installed an entirely different backdoor on hundreds of thousands of servers around the world by exploiting accidental vulnerabilities in Microsoft Exchange software. Though this wasn’t a supply chain hack per se, these new actors carried it out with more recklessness than their Russian counterparts: They compromised a far larger number of networks and left a trail of vandalism in their wake. Yet this campaign failed to capture the sustained interest of the American public or many of its policymakers.
The SolarWinds hack initially grabbed headlines because of the sheer number of networks affected, but this belied the fact that the Russian operators had intentionally disabled almost all their backdoors without ever using them—they were carefully targeting a smaller number of networks. The Exchange perpetrators, conversely, had indiscriminately installed backdoors on any vulnerable server they could find on the internet—an order of magnitude more compromises than the Russians achieved—and had left these backdoors wide open with easily guessed, hard-coded passwords. Whereas the former hack was a carefully executed espionage campaign, not unlike those carried out by the U.S., the latter resulted in tens of thousands of networks left to the mercy of a thriving ransomware industry.
The White House recently named the perpetrators behind the Exchange hack as Chinese government operatives. More important than public attribution, the United States needs to build international support for drawing lines between responsible and irresponsible operations in cyberspace. If the SolarWinds operation was a case of somewhat responsible hacking within the bounds of acceptable state action (even if Russia is far from a responsible actor in cyberspace), the Exchange operation, by contrast, demonstrates how an irresponsibly conducted espionage operation can escalate into collateral damage and instability.
The sense of crisis created by these two operations should not be wasted. Despite critical preventive efforts, offensive operations will continue apace in the foreseeable future—conducted by the United States, its allies and its adversaries. The choice is whether and how to engage in them responsibly and minimize cost to societies. For there are better and worse ways for governments (and their explicit or de facto contractors) to operate in cyberspace. Benign countries should cooperate now to promote verifiable, technical norms for responsible offensive cyber operations.
The U.S. and its allies have previously sought to institute political norms against general categories of nation-state cyber activity. But broad norms, such as the one against all “supply chain hacks,” are sometimes technically ambiguous and impossible to enforce. Further, it would be hard to justify to adversaries why they should willingly constrain themselves from a potent method of access when they have no reason to believe the U.S. will reciprocate. A more diplomatically and technically plausible argument that Secretary of State Antony Blinken or National Security Adviser Jake Sullivan could credibly make to their Russian and Chinese counterparts is for reciprocal agreements not to use irresponsible techniques, such as haphazard backdoors and indiscriminate targeting, which cause significant instability and collateral damage.
More broadly, the U.S. should lead an international effort to decompose cyber operations into their component methods and behaviors and assess each on a spectrum of responsibility. These distinctions will be technically challenging for political leaders and others to understand, but cyber operators and those seeking to defend against them will appreciate them, as we try to explain here. Indeed, a key need in this field is for political leaders in dominant cyber powers to become educated about important variables in cyber operations and to engage each other in bringing oversight to them. This approach could be augmented with engagement between the leadership of states’ various operational entities. Adversaries will be more amenable to frank acknowledgement of a shared reality and accountability than to demands for wholesale cessation of all cyber operations.
Offensive Cyber Operations
Nation-states engage in many virtual activities that fall into the amorphous space of offensive cyber operations: websites taken offline by an unprecedented influx of traffic, hospital networks irretrievably encrypted by malware, state secrets quietly copied from government computers. While any of these might colloquially be referred to as an “attack,” it is helpful to distinguish between cyber espionage—obtaining and exfiltrating confidential information—and cyberattacks, which are intended to achieve some kind of deleterious effect on an adversary’s system. This might be the difference between hacking a computer to read confidential data, or to corrupt or erase it.
Although many operations become public because they are disruptive attacks by design, most offensive actions by state actors are attempts at espionage, not destruction. These efforts happen quietly and frequently, aided by the plethora of insecurities plaguing computer networks and software. The offense has an asymmetric advantage; it will always be easier to find a single way inside a system than to prevent all possible methods of ingress. Many of these technical problems are still decades away from being ameliorated. Moreover, prevention is a question not simply of technical ability but of incentive alignment, extensive deployment and effective use—in both the public and the private sectors.
We do not suggest that investing in defense is Sisyphean. On the contrary, the U.S. government must undertake a multipronged campaign to improve its cybersecurity. But this should be done without saddling the still-nascent effort with quixotic expectations, which then warp the U.S. reaction whenever spying is uncovered. If widespread prevention is impossible, protecting the internet from systematic failure will require shifting focus to shaping competitors’ actions toward predictable outcomes and effects.
Reduce Risks of Unintended Effects
Descending into the technical depths of cyber operations leads to the realization that a cyberattack is difficult to distinguish from espionage. The techniques used to gain access to a network and then navigate through it are often the same, regardless of the end goal. So, too, in many cases are the tools and infrastructure used. Differences between an attack and espionage might appear only once the operators begin executing their plan to cause damage or exfiltrate data. But even that is not dispositive: an attacker might care less about stealth, yet a mediocre or lazy spy could also lack stealth and, worse, cause accidental damage.
While there is no clear technical divide between different kinds of cyber operations, there are important geopolitical differences between them. No state wants to be spied on, but suffering heavy damage is worse. If damage is accidental or careless, due to bad tradecraft, it is unnecessary and all the more outrageous. It also could easily be misinterpreted as an attack, which can lead to inadvertent escalation. The technical ambiguity of cyber operations allows their intent to be ambiguous or possibly mendacious and creates potentially dangerous unpredictability. This is why norms should aim to clarify intent through responsible practices.
Responsible Offensive Behavior
The technical operational norms we suggest should address irresponsible actions that cause adverse effects, such as collateral damage. Consideration should also be given to verifiability. Reckless cyber activity is often noisy, and even subtler recklessness can be identified through proper technical analysis. The ability to document irresponsible behavior lends credibility to norms against it.
Test Tools Before Use
Developing tools to break into systems can be difficult: Hackers must work around the charming idiosyncrasies of computer internals, such as randomized values, nondeterministic behavior and unaccountable unknowns. Important programs could stall, the computer might crash or sensitive data could be deleted. Exploits may not be compatible with certain software versions, hardware versions or configurations.
The fragility of these tools requires a responsible actor to test them as thoroughly as possible before use. Cyber ranges are systems that emulate real-life computer networks, designed to let operators experiment with myriad potential configurations before an operation. Testing is not infallible, but it can help actors establish a baseline level of assurance in the stability of their exploits, an assurance that becomes visible to incident response teams and in higher-level strategic reporting.
Implant testing is important as well. Poorly written malware can run awry and cause unintended issues. Ironically, like any software, malware has vulnerabilities that could be exploited by a third party to gain access to the infected system. Malware should be tested extensively for safety as well, to ensure that it performs only intended tasks.
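The configuration-matrix testing described above can be sketched in a few lines. This is a purely illustrative sketch (the version strings, configuration names and `run_on_range` stand-in are all hypothetical): a tool is exercised against every emulated configuration on a range and cleared for use only if it proves stable across the full matrix.

```python
import itertools

# Hypothetical target variations an operator would emulate on a cyber range.
VERSIONS = ["5.0", "5.1", "6.0"]
CONFIGS = ["default", "hardened"]

def run_on_range(version: str, config: str) -> bool:
    """Stand-in for executing the tool in an emulated environment;
    returns True if the target system stayed stable."""
    # Simulated result: one combination crashes the target.
    return not (version == "6.0" and config == "hardened")

# Exercise the tool against every combination in the matrix.
results = {(v, c): run_on_range(v, c)
           for v, c in itertools.product(VERSIONS, CONFIGS)}

# The tool is cleared only if it was stable everywhere it might be used.
cleared = all(results.values())
print(cleared)  # False: one configuration failed, so the tool is not cleared
```

The point of the sketch is the clearance rule: a single unstable combination is enough to hold a tool back, which is exactly the discipline the Exchange operators skipped.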
Avoid Indiscriminate Targeting
The Microsoft Exchange hackers provide an excellent example of indiscriminate targeting. Responsible actors should carefully select targets, identify any risk of collateral damage, and plan accordingly. The Russians’ approach in the SolarWinds hack, sending a second-stage backdoor to only a select few targets, is a preferable tactic.
Further, some targets, such as hospitals, are considered by current cyber norms to be off limits. But hackers who indiscriminately target random IP addresses could inadvertently damage critical networks and create real-world peril. The targeting may be seen as deliberate and could create dangerous escalation between states.
Prohibit Targets Throughout the Operational Life Cycle
Norms often focus on the operational goal rather than the entire operational life cycle. For instance, norms against targeting critical infrastructure often fail to address cases where hacked critical devices were merely ancillary to the operation. Offensive actors often use “pivot points”: easily compromised devices, such as routers, from which they can launch exploits and gain entry into a critical network. Russian state hackers have been caught pivoting off medical devices when they could easily have used something far less dangerous. In retrospect, pivot devices are sometimes easy to identify, which allows for verifiability. When assessing pivot points, command-and-control infrastructure, and other elements of the operational life cycle, responsible actors should take care in selecting the devices they target.
Automated actions during an offensive operation run the risk of becoming uncontrollable as malware propagates from computer to computer. Without appropriate risk-reduction measures and fail-safes, the result can be a destructive, globe-spanning worm. Worms should have clear boundaries on where they can go, to prevent indiscriminate targeting and out-of-control spread, and an operator should be able to stop or kill a worm at any time using a built-in kill switch. Worms should also incorporate conditional execution logic: deliberate capabilities to sense and understand the target environment, prevent execution against prohibited targets, and restrict effects to valid objectives. In almost all cases, a smarter worm is a safer and more responsible worm.
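The safeguards described above can be sketched as a simple gating routine. This is a hypothetical illustration, not any real implant's logic; the network range, fingerprint fields and flag names are all invented. Propagation is refused unless the kill switch is off, the address falls inside a hard boundary, and the environment matches the intended target.

```python
import ipaddress

# Hard boundary on spread: only this authorized (documentation-range) network.
ALLOWED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]

# Kill switch: in practice this would be polled from operator-controlled
# infrastructure so spread can be halted at any time.
KILL_SWITCH_ACTIVE = False

def environment_matches(fingerprint: dict) -> bool:
    """Conditional execution: act only when the host looks like the intended target."""
    return (fingerprint.get("os") == "ExampleOS 5.1"
            and fingerprint.get("role") == "intended-target")

def may_propagate(addr: str, fingerprint: dict) -> bool:
    if KILL_SWITCH_ACTIVE:  # operator-controlled stop overrides everything
        return False
    ip = ipaddress.ip_address(addr)
    in_scope = any(ip in net for net in ALLOWED_NETWORKS)  # boundary check
    return in_scope and environment_matches(fingerprint)

fp = {"os": "ExampleOS 5.1", "role": "intended-target"}
print(may_propagate("198.51.100.7", fp))  # False: outside the authorized range
print(may_propagate("203.0.113.5", fp))   # True: in scope and matching target
```

Note the order of the checks: the kill switch and the scope boundary are evaluated before any environment sensing, so even a perfect-looking target outside the authorized range is never touched.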
Prevent Criminal and Third-Party Access to Backdoors
Responsible actors should use technical means to prevent criminals and third parties from using their backdoors. The Microsoft Exchange hack is a cautionary example: the backdoors installed there were left open with easily guessed, hard-coded passwords, inviting abuse by others.
Bugs in software are typically accidental, but at times a bug may be what’s known as a “bugdoor,” a flaw deliberately introduced by a developer that can be exploited for backdoor access. Introducing trivial bugs into software makes it easier for criminal elements to find and exploit them. Bugdoors, when used, should be well hidden and not easily identifiable by bad actors analyzing the software.
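To illustrate how small the difference between a trivial bug and a well-hidden one can be, consider this hypothetical one-character flaw (the function names and checks are invented for illustration, not drawn from any real incident): a single logical operator turns a sound authorization check into a bypass that any attacker reading the code could spot.

```python
def authorize_correct(is_authenticated: bool, has_role: bool) -> bool:
    # Sound check: the user must be authenticated AND hold the required role.
    return is_authenticated and has_role

def authorize_bugdoored(is_authenticated: bool, has_role: bool) -> bool:
    # "and" silently changed to "or": now an unauthenticated request that
    # merely claims the role is let through. Trivial flaws like this are
    # exactly what criminal actors scan for.
    return is_authenticated or has_role

print(authorize_correct(False, True))    # False: access denied
print(authorize_bugdoored(False, True))  # True: the flaw grants access
```

A flaw this obvious is an open invitation; the norm argued for here is that any deliberately introduced weakness must at minimum be hard for third parties auditing the code to find and exploit.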
Responsible Operational Design, Engineering and Oversight
In many large, bureaucratic organizations, intention does not translate into action unless serious effort is devoted to process. As in the physical domain, states that lack control and oversight of their cyber operators risk causing damage without their leaders’ prior knowledge. Among other things, this could result in avoidable escalation and consequences for the state at fault. Ensuring responsible activity requires instituting processes for suitable political authorization and oversight, along with technical quality control, to reduce risk and collateral damage.
Such oversight should require the logging of operator behavior, so that it can be reviewed after an action. If, as Lawrence Lessig has stated, “code is law[,]” then developers and operators are responsible for decisions that previously would have been reserved for policymakers and other elements of the sovereign state.
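A minimal sketch of what such operator logging might look like, assuming a simple hash-chained log so that reviewers can both reconstruct what was done and detect after-the-fact tampering (the class, field names and operator IDs are all illustrative):

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log of operator actions; each entry commits to the
    previous entry's hash, making silent edits detectable on review."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # sentinel hash for the first entry

    def record(self, operator: str, action: str, target: str) -> dict:
        entry = {"ts": time.time(), "operator": operator,
                 "action": action, "target": target, "prev": self._prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._prev = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered entry breaks it."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("op-17", "deploy-implant", "203.0.113.5")
log.record("op-17", "begin-exfiltration", "203.0.113.5")
print(log.verify())  # True: the chain is intact and reviewable
```

The design choice that matters for oversight is tamper evidence: an operator (or a post-hoc reviewer) cannot quietly rewrite history, which is what makes the log usable as a basis for accountability.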
No Good Reason to Resist
To be effective, norms must be conducive to the interests of key actors and technically feasible. Policymakers in cyber-power states, including at the Cabinet level, need to learn more about how cyber operations are executed and work in practice. Technical operators need to appreciate why caution, precision, and minimization of unintended harm are so important to their leadership, their countries, and the world. Including technical experts at the table will reduce risks in both directions—from the technical to the political and the political to the technical.
In all of this, the responsibility or irresponsibility of behavior should be assessed across time and IP address space. No single misfire by a state group is proof of irresponsibility, and no single entity carries out all state operations. Assessing adversary behavior with the required nuance and finesse will require a high degree of expertise in analyzing specific actors. For instance, in Russia, the SVR, GRU and FSB intelligence agencies all operate in distinct ways, and so do subgroups within each of those entities. Some of the most irresponsible and disruptive behavior in cyberspace occurs at the nexus of adversary states and the criminal hacking groups they harbor. This is particularly egregious in Russia, where the state allows ransomware gangs to flourish if they avoid targeting Russia or its allies. In exchange, the state benefits by nationalizing the talent and infrastructure of these groups as needed.
Governments that harbor cyber criminals, or themselves engage in criminal behavior, may not see a shared interest in limiting damage. But this assumes that there is little risk that sloppy or unrestrained cyber operations could cause the target to escalate—intentionally or not—or could turn increasing numbers of countries against the states whose hackers wreak havoc. The concepts discussed in this post will not ameliorate blatantly dangerous behavior in the near term. But they would clarify what the U.S. considers to be an irresponsible activity, moving the nation away from a murky model of outrage at every Russian phishing email. By articulating and promoting the discussion of responsible operations, the U.S. could gain international political leverage.
Admittedly, it will take a certain hardheadedness and even cynicism among U.S., Russian and Chinese leaders to discuss best practices in malware development and placement, but this is the nature of diplomacy in the 21st century. Major powers bear responsibility for reducing systemic risk in cyberspace, and to do this they must make offensive operations more predictable. Each country wants to expel spies from its computer networks, and each will struggle to design better defenses against cyber operations. But technical panaceas are unlikely. Better to create codes of honor among spies, and their bosses.
Editor's note: The piece has been updated to provide further clarity regarding the scope of the SolarWinds breach.
The views and opinions expressed here are those of the author(s) and should not be interpreted as the official policy or position of any agency of the U.S. government or other organization.