Thoughts About the Revised Lieberman-Collins Cybersecurity Bill
As I noted on Friday, Senators Lieberman and Collins have released a Manager’s Amendment in the form of a substitute which reflects some significant changes to their original bill. You can access the full text of the amendment here. Talk in the Senate is that Senator Reid plans to call the bill up for consideration in the middle of this coming week. One of the critical questions yet to be answered is whether he will “fill the tree” (a procedural process whereby all of the possible places for amendment are filled before the bill comes to the floor, making it impossible for Senators to offer any amendments other than the ones the Majority Leader has already placed in the tree) and whether the motion to proceed to consideration of the bill will garner the requisite 60 votes needed. I have no real insight on those questions.
So, I’m left with actually thinking about the substance of the bill. I’m going to presume that readers are familiar with the earlier postings about the original version of Lieberman-Collins. If not, they can be accessed here, here, and here. But I don’t want to reinvent the wheel, so this post is mostly about the changes to the bill. In short, I think the regulatory changes still leave a problematic system in place and that the information sharing changes are a step backward. Here's why:
Regulations -- I’m pleased to say that, at least in theory, the mandatory regulation régime is gone. (As I’ll discuss below, it may NOT be gone in practice, but that’s a different issue.) So the normative question that Jack Goldsmith and I have been discussing (here, here, and here) is now more or less moot.
While the original bill proposed by the Senators would have assigned DHS the role of creating mandatory cybersecurity standards for critical infrastructure industries, the revised bill, to its credit, moves away from that mandatory system. Instead, the bill now requires the creation of industry best-practice standards for protecting critical infrastructure. Those standards are voluntary, and the bill offers incentives to try to push the owners of critical infrastructure to adopt them. Those incentives include liability protection, priority assistance in responding to cyber threats, and access to classified information about threats.
I think there are still several questions about this new approach that need answering. Let’s list them seriatim:
One incentive for adopting the voluntary standards is that the government will withhold cyber threat and vulnerability information from those private sector actors who do not adopt the standards. It is at least reasonable to ask whether this is the right carrot. Should the government be in the position of denying government threat information to critical infrastructure owners who choose not to adopt the voluntary standards (especially if that decision is made for justifiable business reasons of cost)? If the infrastructure in question is truly “critical,” it is in America’s collective interest to protect it as much as we can. Denying its owners the informational tools to do so because they don’t follow the government’s lead may be cutting off our nose to spite our face.
The liability protections provided as an incentive to adopt the voluntary standards are pretty weak tea. If a private sector actor adopts the voluntary standards but still gets hacked, causing third-party damage, it can still be sued for consequential damages. All that the bill offers as an incentive is a prohibition against punitive damages. But I wonder if that is really a concession at all. I would have argued that even in the absence of an explicit liability protection, any company that adopted government-approved cybersecurity standards and could show its compliance with them would be immune from punitive damages. After all, by hypothesis, it will have been using the government-identified best available tactics and still gotten hacked, hardly the sort of conduct that would ever lead to punitive damage assessments. If the liability protection had been greater (e.g., for consequential damages) that would be a real incentive, but as it is the protections appear to be of relatively little value.
I’m still not convinced that even a voluntary system of government-organized standard setting is wise. Here’s why: First, voluntary standards will take time to develop. Over the multi-year process while they are being developed and adopted, we can expect that innovation and investment in cybersecurity products will substantially slow. No investor of any thoughtfulness will invest in a product that might not be one of the standard-approved methods of providing cybersecurity, even if it might be a better one. Thus, there is a significant cost in security development that derives from the transaction costs of creating the standards.
Second, once articulated, these government standards will likely form the basis for lawsuits against those who don’t adopt them. It may be that a liability system is the best way to deal with the acknowledged externalities of cybersecurity, but I would be a great deal more comfortable if the standards were non-governmental and if the incentives for lawsuits were minimized (e.g., through damage caps and a prohibition on attorneys’ fees).
Third, color me skeptical, but a voluntary standard system, once in place, is but a short step from a mandatory one. Indeed, Senator Lieberman has already said that if industries don’t adopt the voluntary standards, Congress will make them do so. But given the government’s own (poor) track record of combating cyber threats, and the hierarchical way in which it develops rules and regulations at a glacial pace, I remain deeply skeptical that the government can set the right standards in a dynamic environment like cybersecurity.
Finally, I have to question whether the regulatory concessions made by the bill are as significant as they appear. To my mind, some language in the bill suggests that the “voluntary standards” may not be so voluntary after all. Under section 103(g) of the bill, Federal regulatory agencies are free to make the voluntary standards mandatory as to the sectors they regulate. In addition, if they choose not to do so, they are required to report to Congress why they have not. About this, three things can be noted: a) to the extent a regulatory agency already has authority to set mandatory cybersecurity standards, the provision of voluntary standards in the bill is unnecessary; b) it also smacks a bit of a bait and switch, since the authority to impose mandatory rules will continue; and c) requiring a report when the agency chooses not to act is a strong incentive to make the voluntary rules mandatory, and amounts to Congress putting its thumb on the scales. Maybe I’m misreading this provision – but I don’t think so.
Information Sharing -- The other critical portion of the bill is the set of information sharing provisions. The revised bill continues the prior version’s focus on the creation of cybersecurity exchanges within the Federal government to facilitate sharing threat and vulnerability information. As drafted, there continue to be questions about whether the bill will achieve these objectives. Here’s why:
First, the bill requires the creation of a Federal cybersecurity information exchange, presumably led by DHS. Unlike the earlier version, however, this bill insists that any additional Federal cybersecurity exchanges be “civilian” in nature. It is a good idea to strengthen DHS’s role in this program; after all, most of the network is civilian in nature. But some will argue that the bill makes a mistake by deliberately excluding the possibility that DoD and/or NSA might also operate an exchange. While I understand, and agree with, the concern about the militarization of cyberspace defense, the structure now authorized may be inefficient and ineffective. It will require that in all cases DoD components go through DHS to get private sector cyber threat information. One wonders if there isn’t a subset of cases where direct military engagement is appropriate and even necessary.
Second, the law would limit the sharing of cyber threat information with other Federal agencies – recreating the stovepipes that we have identified as a cause of the 9/11 intelligence failures. This is a step backward. The original version of the Lieberman-Collins bill allowed disclosure to law enforcement if the information related to a crime that has been, is being, or is about to be committed.
That was quite broad – perhaps even too broad. But now the pendulum has swung the other way. Under the revised bill, sharing information with law enforcement is authorized only to prevent a cyber threat, to combat an imminent threat of death or serious bodily injury, or to protect minors (e.g., from child pornography). While those are clearly important goals, one wonders why sharing with law enforcement to protect national security (say, against a biological or nuclear threat) or to combat serious crime like that of the Mexican narco-terrorists is prohibited.
The liability provisions of the revised bill will also cause some concern. The bill would authorize lawsuits against the US government for violations of the limitations outlined above and also authorizes the award of attorneys’ fees – clearly an attempt to create an incentive for lawsuits. More importantly, the bill provides only limited liability protection for the private sector actors who share information. As drafted, it protects only those who act in good faith and without gross negligence.
Facially, of course, those limitations are appealing – who could possibly want to protect a private sector company that acted in bad faith or was grossly negligent? But as anyone familiar with litigation knows, this sort of fact-based formula fosters litigation risk. What it means, of course, is that most plaintiffs will be able to avoid summary judgment and that private sector actors who share will have to bear the risk of years-long litigation with its attendant costs. Even if they were likely to prevail in the end, the transaction and reputational costs will make settlement a preferable option. This provision alone is likely to ensure that nobody actually shares cyber threat information at all, for fear of being sued. Certainly that would be my advice if I were General Counsel at a major corporation.
***
One final note: Earlier this year, I suggested that cybersecurity legislation is dead for this year. I still think it is. Even if the Lieberman-Collins bill gets acted upon by the Senate (a big if), I simply cannot see the House agreeing to a new regulatory system. Of course, I didn’t think the bill would get this far … so that just goes to show how little I know!
Paul Rosenzweig is the founder of Red Branch Consulting PLLC, a homeland security consulting company and a Senior Advisor to The Chertoff Group. Mr. Rosenzweig formerly served as Deputy Assistant Secretary for Policy in the Department of Homeland Security. He is a Professorial Lecturer in Law at George Washington University, a Senior Fellow in the Tech, Law & Security program at American University, and a Board Member of the Journal of National Security Law and Policy.