Published by The Lawfare Institute
After Colonial Pipeline suffered a ransomware attack in May 2021 and took its 5,500-mile system offline for nearly a week, the Transportation Security Administration (TSA) issued a set of first-ever directives imposing mandatory cybersecurity requirements on pipeline operators. Industry balked, criticizing the rules for being too prescriptive. On July 21, 2022, the TSA issued a revised directive, heralded by the government and praised by industry for being “performance-based.”
But the revised TSA directive is not, strictly speaking, performance-based. It is to some extent risk-based, and it does offer industry flexibility, but it has no criteria for measuring the performance of the cybersecurity controls it imposes. Instead, the revised rule fits mainly within a regulatory approach often called “management-based,” meaning that it requires regulated entities to adopt certain management practices. The lack of measurable outcomes in the revised rule casts doubt on the invocation of the concept of performance-based regulation in this context—and in the context of cybersecurity more broadly.
The specific selection of controls in the TSA’s final directive, representing the results of negotiation with industry, highlights the challenge of assembling a package of cybersecurity controls that is both comprehensive and achievable. The rhetoric around the TSA’s directives highlights confusion about what makes a rule performance-based and obscures the reality that, so far, there is no way to measure cybersecurity outcomes. A clearer understanding of the different approaches to regulation, and their respective pros and cons, could help cybersecurity policymakers. Ultimately, the revised TSA directive suggests that a hybrid approach that draws on multiple different modes of regulation represents the best path forward.
Rebalancing the Mix of Cybersecurity Voluntarism and Mandates
Aside from the usual opposition of critical infrastructure operators to increased regulation, there is widespread recognition that U.S. policy should be rebalanced to mandate more robust cybersecurity from key private-sector entities. While collaboration between the government and industry will always be a necessary element of the cybersecurity ecosystem, the decades-long policy, across administrations, that reduced public-private partnership to voluntarism alone has left key systems woefully and perennially vulnerable. In response, across a range of sectors, policy initiatives are moving forward that would create more explicit cybersecurity obligations, including soon-expected rules for the drinking water sector, cybersecurity language in the House privacy bill (H.R. 8152), and a possible Federal Trade Commission (FTC) rulemaking on commercial surveillance and data security.
The pursuit of performance-based requirements is likely to be a major theme of these efforts. Delineating what is and what isn’t performance-based regulation and how it fits with other approaches to regulation is the first step in assembling the most effective mix of approaches for cybersecurity.
What Is Performance-Based Regulation?
Performance-based regulation imposes outcomes objectives on the targets of regulation rather than telling them exactly what actions they must take or technologies they must adopt. Regulated firms are allowed to determine how to achieve those goals or targets. Ideally, they innovate and choose the cost-effective solutions most suited to their unique circumstances. Performance-based regulation has been embraced globally for everything from fire codes to regulation of electric utilities. Take the performance-based emissions standards set by the Environmental Protection Agency under the Clean Air Act. One typical provision states that no metal furniture surface coating operation “shall cause the discharge into the atmosphere of VOC [volatile organic compounds] emissions … in excess of 0.90 kilogram of VOC per liter of coating solids applied.” A clear, measurable objective, but no direction on how to achieve it.
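The EPA limit illustrates what makes a standard performance-based: compliance reduces to a measurable ratio, with the means left entirely to the operator. A minimal sketch of that compliance check (the figures below are illustrative, not drawn from any actual facility):

```python
# Hedged sketch: checking compliance with a performance standard such as the
# EPA's 0.90 kg VOC per liter of coating solids limit. Inputs are illustrative.

VOC_LIMIT_KG_PER_LITER = 0.90  # the measurable performance target

def is_compliant(voc_emitted_kg: float, coating_solids_applied_l: float) -> bool:
    """True if VOC emissions per liter of coating solids are within the limit."""
    return voc_emitted_kg / coating_solids_applied_l <= VOC_LIMIT_KG_PER_LITER

# An operation emitting 85 kg of VOC while applying 100 L of coating solids
# complies; one emitting 95 kg over the same volume does not.
print(is_compliant(85.0, 100.0))  # True
print(is_compliant(95.0, 100.0))  # False
```

The regulator never sees the coating line's design, only the measured ratio, which is exactly the division of labor performance-based regulation aims for.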
Despite widespread support for the concept, University of Pennsylvania professor of law and political science Cary Coglianese concluded in 2017 that “the case for performance-based regulation still remains largely theoretical.” Moreover, as Coglianese—who has done the deepest and most balanced writing on the subject—has cataloged, performance-based regulation has its limits. The approach has suffered some spectacular failures, leading to leaky buildings in New Zealand and Volkswagen’s design of a device specifically intended to defeat emissions testing equipment.
Nevertheless, U.S. policy is supposed to prefer performance-based regulation. Presidents mandated it at least as early as 1993, when Bill Clinton ordered that regulators shall, “to the extent feasible, specify performance objectives, rather than specifying the behavior or manner of compliance that regulated entities must adopt.” President George W. Bush reaffirmed the principle in 2003 (in a circular still in effect), and President Barack Obama did the same in a 2011 order. Without specifically mentioning the performance-based approach, President Joe Biden reaffirmed the Clinton and Obama executive orders early in his administration. And Congress has endorsed the concept, directing agencies, in developing standards, to give preference where appropriate to performance criteria rather than design criteria.
A performance standard can be very loose, such as the 14-word rule that requires civil aircraft to be “airworthy.” (Don’t worry—there’s a lot behind that one word.) Or it can be very tight, such as the standard that requires automobile brakes to bring a passenger car traveling at 20 miles per hour to a complete stop in no more than 20 feet. And a performance-based rule, like any regulation, can be aimed at changing corporate behavior, or it can serve to ratify dangerous or harmful practices. But there is one consistent point in all the work on performance-based rules (ranging from Coglianese’s postmortem on the Volkswagen scandal to the principles articulated in 2020 by the then-chair of the Commodity Futures Trading Commission): Performance-based regulations will work only if government agencies are able to specify, measure, and enforce performance.
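The brake standard's two numbers imply a minimum deceleration that any compliant design must deliver, derivable from the kinematic identity v² = 2ad. A back-of-the-envelope check (my arithmetic, not part of the standard):

```python
# Back-of-the-envelope: the minimum constant deceleration implied by a
# "20 mph to a full stop within 20 feet" performance standard (v^2 = 2*a*d).

MPH_TO_FPS = 5280 / 3600   # miles per hour -> feet per second
G_FPS2 = 32.174            # standard gravity in ft/s^2

def required_deceleration_g(speed_mph: float, stop_distance_ft: float) -> float:
    """Minimum constant deceleration, in g, to stop within the given distance."""
    v = speed_mph * MPH_TO_FPS
    return (v ** 2) / (2 * stop_distance_ft) / G_FPS2

print(round(required_deceleration_g(20.0, 20.0), 2))  # ~0.67 g
```

Roughly two-thirds of a g: the rule names only the outcome, and brake designers are free to reach that deceleration with whatever hardware they choose.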
A 2018 National Academies report on designing safety regulations for high-hazard industries, which Coglianese pointed me to, examines the differences between performance-based rules and other regulatory approaches—with striking relevance to cybersecurity. In addition to performance-based regulation, there is prescriptive regulation, which mandates a specific technology or other solution. “Prescriptive” is widely used as a pejorative by opponents of regulation, often combined with “one-size-fits-all” and “tech-mandate.” But in many contexts, prescription is unavoidable or desirable or both. A third approach is management-based regulation, which requires regulated entities to undertake certain processes, such as developing risk mitigation plans or providing employee safety training. Examples of management-based regulation in cybersecurity include the requirements in the FTC rules for financial institutions that they “develop, implement, and maintain a comprehensive information security program” and that they periodically perform a “risk assessment that identifies reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of customer information.”
Note that a management-based regulation is prescriptive in the sense that the regulated entity cannot get out of developing the plan or doing the assessment, but it is neither performance-based (there is no specification of risk tolerance levels or outcomes metrics) nor does it require any specific technology solution. Instead, a management-based approach presumes that requiring organizational attention to risks and mandating the establishment of internal processes will reduce the probability of failures, even if that reduction may not be provable empirically. To put it another way, performance-based regulation focuses on ends or outcomes, while management-based or other prescriptive forms of regulation focus on means or inputs, which may be technologies or management processes.
The National Academies study stated that it is impossible to say in the abstract that one of these approaches is better than the others. Too many contextual factors, such as the nature of the regulatory problem, the characteristics of the industry, and local conditions (especially the regulator’s capacity) can change the distribution of advantages and disadvantages of each type of regulation. With respect specifically to management-based approaches (which the National Academies study noted were often inaccurately labeled “performance-based”), the study warned: “Requiring management activities ... does not assure the regulator or regulated industry that these activities actually reduce the risk of catastrophic events. Requirements for risk analysis and the development of management programs do not necessarily even demand that such programs, once established, lead to a demonstrable end state of improved safety.”
The Revised Pipeline Directive
Among the most prescriptive aspects of the TSA’s initial pipeline directive, issued in July 2021, was an insistence that certain measures be applied comprehensively and that they be implemented on strict deadlines. In particular, the directive required pipeline operators to reset all passwords within their information technology (IT) systems by August 25, 2021, and reset passwords on all equipment within their operational technology (OT) systems by November 23, 2021. The directive also set strict deadlines for implementing software patches—requiring, for example, that all patches be installed within 30 days of availability. Pipeline operators argued that the provisions did not account for the on-the-ground reality of their systems and thus were impossible to implement.
The revised directive completely revamped those particular requirements. First, instead of covering all systems, it focuses on “critical cyber systems,” defined broadly as any IT or OT system or data that, if compromised or exploited, could result in operational disruption. As to passwords, it requires companies to adopt identification and authentication policies and procedures designed to prevent unauthorized access to critical cyber systems. These policies and procedures must include a schedule for resets of passwords and other memorized secret authenticators, along with documented and defined mitigation measures for critical systems that will not have passwords reset in accordance with the required schedule. For patching, the revised directive requires a patch management strategy that ensures all critical security patches and updates on critical cyber systems are current. This strategy must include a risk methodology for determining the criticality of patches and an implementation timeline based on criticality, giving priority to all patches on the catalog of known exploited vulnerabilities compiled by the Cybersecurity and Infrastructure Security Agency.
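The required patch methodology is management-based, but its core prioritization step is easy to picture. A hypothetical sketch (the function and field names are mine; a real implementation would pull CISA's Known Exploited Vulnerabilities catalog from its published feed rather than hard-code entries):

```python
# Hypothetical sketch of the KEV-first prioritization the revised directive
# requires: patches for vulnerabilities in CISA's Known Exploited
# Vulnerabilities (KEV) catalog jump to the front of the queue.

def prioritize_patches(patches, kev_cve_ids):
    """Order patches: KEV-listed CVEs first, then by the operator's own
    criticality score (higher first). `patches` is a list of dicts with
    'cve' and 'criticality' keys; `kev_cve_ids` is a set of CVE IDs."""
    return sorted(
        patches,
        key=lambda p: (p["cve"] not in kev_cve_ids, -p["criticality"]),
    )

kev = {"CVE-2021-44228"}  # illustrative; the real catalog has hundreds of entries
queue = [
    {"cve": "CVE-2022-0001", "criticality": 9},
    {"cve": "CVE-2021-44228", "criticality": 5},  # KEV-listed
]
print([p["cve"] for p in prioritize_patches(queue, kev)])
# The KEV-listed CVE comes first despite its lower internal score.
```

Note what the sketch cannot express: how quickly each tier must actually be patched, which is exactly the performance question the directive leaves open.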
The revised directive eliminated a requirement to adopt “zero trust” (the concept that no implicit trust should be granted to assets or user accounts). It also dropped an explicit requirement to establish passive domain name system capabilities and a requirement subjecting IT and OT systems to application allowlisting (a security control that allows only preapproved applications and processes to run). A requirement for weekly scans by antivirus/antimalware programs was replaced by a requirement to implement continuous monitoring and detection policies and procedures to prevent, detect, and respond to threats and anomalies. A section on logging was arguably strengthened by requiring continuous collection and analysis of data for potential intrusions and anomalous behavior.
Other changes eliminated the specification of particular controls. For example, instead of saying that pipelines must employ “filters” sufficient to prohibit ingress and egress of communications with known malicious IP addresses, the revision states that they must have “capabilities” to prohibit ingress and egress communications with known or suspected malicious IP addresses.
It Has Risk-Based Elements, but Is It Performance-Based?
To assess how the TSA’s revised pipeline directive fits within the concept of performance-based regulation, consider the Consumer Product Safety Commission (CPSC) rule for those pesky child-resistant caps on prescription drug containers. The rule specifies that a cap must withstand the efforts of 85 percent of a group of 200 children between 42 and 51 months old for at least five minutes and 80 percent of the group for at least 10 minutes. The rule is performance-based: In 13 pages of single-spaced, 12-point font, it details how to conduct the child-resistant tests (including this: “If one or both children have not used their teeth to try to open their packages during the first 5 minutes, the tester shall say immediately before beginning the second 5-minute period, ‘YOU CAN USE YOUR TEETH IF YOU WANT TO’”). But nowhere does the rule suggest how to actually design a container. The rule is also risk-based: It accepts the risk that as many as 15 percent of 4-year-olds (and unspecified higher percentages of older children) will be able to open the protective containers within five minutes and access dangerous medicines. Given all other factors (including the need to have a pill container that adults can open), the CPSC decided that this level of risk was acceptable.
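Stripped of its 13 pages of protocol, the CPSC test reduces to two measurable thresholds. A minimal sketch of the pass/fail arithmetic (the real protocol, with its panels and retesting rules, is far more detailed):

```python
# Minimal sketch of the CPSC child-resistance pass/fail arithmetic:
# a package passes if at least 85% of the child panel fails to open it
# within 5 minutes and at least 80% fail to open it within 10 minutes.

def package_passes(panel_size, opened_within_5min, opened_within_10min):
    """Pass/fail for a child-resistance test panel, given counts of children
    who managed to open the package within each time limit."""
    resisted_5 = (panel_size - opened_within_5min) / panel_size
    resisted_10 = (panel_size - opened_within_10min) / panel_size
    return resisted_5 >= 0.85 and resisted_10 >= 0.80

# 200-child panel: 28 openings within 5 minutes (86% resisted) and 38 within
# 10 minutes (81% resisted) passes; 32 and 38 (84% resisted at 5 min) fails.
print(package_passes(200, 28, 38))  # True
print(package_passes(200, 32, 38))  # False
```

The quantified risk tolerance is right there in the thresholds, which is precisely what the cybersecurity rules discussed below lack.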
Like the CPSC rule, the revised pipeline rule has elements of a risk-based approach, although without the precise quantification of risk in the CPSC rule. On patch management, it explicitly requires the adoption of a risk-based methodology. With respect to OT systems, the rule recognizes that some patches or updates could actually degrade operational capacity, so it states that, if patching a specific OT system would be severely disruptive, the patch management strategy must include additional mitigations that address the risk created by not installing the patch or update. Curiously, though, while the directive talks about risk reduction, it does not expressly say that pipelines must build their cybersecurity plans in response to a risk assessment. In this, it varies from most other management-based cybersecurity frameworks, which require the cybersecurity plan to be based on the risk assessment, updated periodically.
Is the revised directive performance-based? Some items arguably are, but only because they seem to be stated in absolute terms. For example, the requirement to block and prevent unauthorized code, including macro scripts, from executing, may be measurable, in that the desired outcome is specified (no execution of unauthorized code), although one might worry that “unauthorized code” is not well defined. But the requirement to “document and define” mitigation measures for components of critical systems that will not have passwords reset and to have a timeline to complete those mitigations is definitely not performance-based. It says nothing about how well those mitigations must work. Likewise, the requirement to “[i]mplement access control measures ... to secure and prevent unauthorized access” to critical systems seems to involve a prediction, not a measurement of performance. (And it seems unlikely that the drafters of the rule meant that regulated systems had to prevent all unauthorized access by all attackers, no matter how advanced or persistent. That would just be impossible.)
The risk-based patch management system likewise sets an unmeasurable standard: “[r]educe the risk of exploitation of unpatched systems through the application of security patches and updates ... consistent with the Owner/Operator’s risk-based methodology.” Reduce how much? The CPSC was very clear: A risk of 15 percent of 4-year-olds opening the containers within five minutes is acceptable, but a risk that 16 percent could do so is unacceptable. No one seems prepared to adopt a binding rule that says “prevent unauthorized access by 85 percent of attackers.”
This is one of the conundrums at the heart of all cybersecurity: Perfection is not possible, but risk is not easily quantifiable. A risk-based approach, by definition, does not demand perfection. Hence the repeated use in cybersecurity statutes and regulations of the word “reasonable,” and hence the cost-benefit, totality-of-the-circumstances test the word “reasonable” implies. But unless regulators are prepared to quantify how far short of perfection entities may fall in complying with a cybersecurity requirement, there is nothing to measure, and a regulation cannot truly be said to be performance- or outcomes-based.
A Hybrid Approach for Cybersecurity
At some level, the label applied to a regulatory approach does not matter. As the National Academies report noted, “performance-based” is often used to mean “flexible,” applied to management-based regulations that give regulated entities broad flexibility in designing internal systems but that have no performance metrics. However, there is risk in the lack of clarity: Describing regulations that require management systems as performance-based implies that regulators are holding firms accountable for achieving specified outcomes, such as demonstrable reductions in the frequency of incidents, when in fact there is no such assessment—and no possibility of one, because there is no outcomes-based metric.
The management-based approach is common in cybersecurity frameworks. Indeed, whether it is the National Institute of Standards and Technology (NIST) framework for critical infrastructure or the International Organization for Standardization (ISO) 27000 series or the controls that can be compiled from the FTC’s enforcement actions, many irreducible elements of a cybersecurity posture are management-based. Some of these controls will be expressed in fairly high-level terms: A regulated entity must have a cybersecurity plan based on an up-to-date inventory of assets and a risk assessment taking account of the entity’s circumstances, including its size and the sensitivity of its operations. Entities must have an access control policy and procedures. They must reassess their plan periodically and update it in light of changing threats and defenses. An entity must have an incident response plan. Other controls are more granular but still process oriented: An access control policy must manage access rights based on the principles of least privilege and separation of duties. Employees must be provided with security training or awareness programs. An entity must authenticate users, update and patch software, follow sound password management policies, and monitor its system for compromise.
Most of these controls are in the revised TSA directive. (Oddly, employee training is not mentioned, and an asset inventory is not specifically required.) All of them fit within the concept of management-based regulation: They require regulated entities to adopt certain management practices (in that sense, they are prescriptive) without specifying any particular technology—and without specifying any outcome. Whether an entity has such controls in place is auditable and measurable, but the measure is the existence of the particular control, not its performance. Moreover, compliance can easily become a binary question (the policies either exist or do not exist) without regard to how good they actually are. Nor is the existence of these management policies a guarantee that they are being followed.
Just as management-based approaches can be mistakenly labeled as performance-based, so too can technologically prescriptive controls. Consider the effort to develop cybersecurity performance goals for critical infrastructure control systems. In July 2021, President Biden signed a National Security Memorandum on Improving Cybersecurity for Critical Infrastructure Control Systems instructing the Department of Homeland Security to lead the development of cross-sector control system cybersecurity performance goals. The current version of the Cross-Sector Cybersecurity Performance Goals (CPGs) Common Baseline is impressive, but, again, it could not be said to meet the criterion of performance-based regulation in the traditional sense. It addresses measurement, but only in terms of the existence of the specified controls, not in terms of how well they perform. Moreover, unlike the TSA directive, it has some provisions that are bluntly prescriptive in technical terms, such as the provision stating that automatic account lockout after five or fewer failed login attempts should be enabled on all password-protected IT and OT assets to reduce the risk of brute force attacks. Sometimes, a technologically prescriptive control is best.
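The lockout provision is prescriptive enough to implement almost verbatim. A hypothetical sketch (in-memory only; a real control would persist state and define an unlock or time-out policy):

```python
# Hypothetical sketch of the CPGs' lockout control: lock an account after
# five or fewer failed login attempts to blunt brute-force attacks.
# In-memory only; a real implementation would persist counters and
# define an administrator-unlock or time-out policy.

MAX_FAILED_ATTEMPTS = 5

class AccountLockout:
    def __init__(self):
        self.failed = {}    # username -> consecutive failed attempts
        self.locked = set()

    def record_failure(self, user: str) -> bool:
        """Record a failed login; return True if the account is now locked."""
        self.failed[user] = self.failed.get(user, 0) + 1
        if self.failed[user] >= MAX_FAILED_ATTEMPTS:
            self.locked.add(user)
        return user in self.locked

    def record_success(self, user: str) -> None:
        """A successful login resets the failure counter (unless locked)."""
        if user not in self.locked:
            self.failed[user] = 0

guard = AccountLockout()
for _ in range(4):
    guard.record_failure("operator1")   # four failures: not yet locked
print(guard.record_failure("operator1"))  # True: locked on the fifth failure
```

Note that even this bluntly prescriptive control measures only its own presence and threshold, not how much brute-force risk it actually removes.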
Building a Mix of Regulatory Approaches for Cybersecurity
It’s time for cybersecurity regulators to break away from the prescriptive versus performance-based dichotomy. Instead, they need to draw upon all the modes of regulation available, considering sector-by-sector the nature of the threats, the characteristics of the industry, and the capabilities of the sector-specific regulator.
The task then becomes assembling the right mix. In the National Academies report, there is a description of the regulatory approach to physical aspects of pipeline safety that is very instructive. It illustrates how different approaches are better suited for different parts of a problem. For example, external corrosion is a risk for all steel pipes. To address this relatively well-understood threat, the U.S. Pipeline and Hazardous Materials Safety Administration (PHMSA) requires pipeline operators to install cathodic protection (generally involving the application of a low-voltage electric current to the pipeline), which is an established means of preventing external corrosion of pipe steel under a wide range of conditions. At the same time, a pipeline may fail when it is over-pressurized. The likelihood of such damage can be influenced by steel type, wall thickness, fabrication methods, and other design and technology choices. Instead of making these choices for pipeline operators, PHMSA regulations establish a formula for calculating a pipeline’s safe maximum operating pressure. A pipeline designer can adjust the choice of pipeline parameters, materials, and fabrication options.
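The over-pressurization example is a true performance standard because the safe limit is computed from a formula, leaving the design inputs to the operator. A sketch of the design-pressure calculation PHMSA applies to steel pipe (as I read 49 CFR 192.105, P = (2St/D) x F x E x T; the numbers below are illustrative, not from a real pipeline):

```python
# Sketch of the performance-style design-pressure formula PHMSA uses for
# steel pipe (49 CFR 192.105): P = (2*S*t/D) * F * E * T. Any combination of
# steel grade, wall thickness, and diameter satisfying the formula is allowed.

def design_pressure_psi(yield_psi, wall_in, diameter_in,
                        design_factor, joint_factor=1.0, temp_factor=1.0):
    """Maximum design pressure (psig) for a steel pipeline segment."""
    return (2 * yield_psi * wall_in / diameter_in) * design_factor \
        * joint_factor * temp_factor

# Illustrative inputs: X52 steel (52,000 psi specified minimum yield),
# 0.5 in wall, 24 in diameter, 0.72 design factor for the location class.
print(round(design_pressure_psi(52_000, 0.5, 24, 0.72), 1))  # 1560.0
```

A designer who wants a higher operating pressure can thicken the wall or pick stronger steel; the regulator checks only the computed limit.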
A third type of risk faced by pipelines is inadvertent rupture caused by a farmer’s plow or by a backhoe during some unrelated construction. The potential for such third-party strikes varies according to context-specific factors. Among these are whether the pipeline passes near residential, agricultural, or industrial locations with different degrees of exposure to human activities that can damage buried pipes. Therefore, the agency requires operators to develop a customized damage prevention program with the understanding that the elements of the program—such as whether protective pipe casings will be installed, rights-of-way patrols will be deployed, or public awareness campaigns will be intensified—will reflect context-specific risk factors.
In this way, PHMSA prescribes a specific technology (cathodic protection) that can be applied uniformly across all regulated entities to address a risk that is well understood and common throughout the sector. It defines a measurable outcome (maximum operating pressure) for a problem that can be solved by multiple combinations of materials and design choices, and it uses a management-based approach (requiring regulated entities to adopt and implement written damage prevention management programs) to address a problem ill suited to either a technology-prescriptive or a performance-based approach.
So, too, can we structure cybersecurity regulation.
Comprehensiveness vs. Achievability, Tailoring vs. Harmonization
It is noteworthy that many of the controls in the cross-sector performance goals for critical infrastructure (CPGs) are lacking from the TSA directive. For example, the CPGs have a section on supply chain risk, a topic not covered at all in the TSA directive. Moreover, in the face of industry opposition, the TSA dropped some of the controls in the CPGs that were included in the initial directive. For example, the initial directive required the use of allowlisting to prevent unauthorized programs from executing, a control recommended in the CPGs, but the reference to allowlisting was dropped from the revised directive. At the same time, the revised directive has controls not mentioned in the CPGs, such as the capability to monitor or block connections from known or suspected malicious command-and-control servers. Some of this can be attributed to the fact that the CPGs are focused on OT systems, while the TSA pipeline directive had to cover both OT and IT.
But that does not explain all the differences. In fact, there is still no single set of cybersecurity controls that anyone seems comfortable applying across sectors or across regulatory bodies. There are now multiple collections of cybersecurity controls, including the ISO 27000 series, the catalog of controls known as SP 800-53 for government systems, issued and periodically updated by NIST, the parallel set of controls in NIST SP 800-171 for contractor systems, the rules adopted by New York state for financial services, the highly developed set of critical infrastructure protection standards for the bulk electric system overseen by the Federal Energy Regulatory Commission, controls listed in the settlements of the enforcement actions brought by the Federal Trade Commission, and the FTC’s revised rules for entities subject to its jurisdiction under the Gramm-Leach-Bliley Act. All share a considerable overlap, but there is wide divergence in terms of their content and granularity. To use just one crude comparison, while the revised TSA pipeline directive includes 36 controls by my count, the FTC rule has 26 controls, NIST SP 800-171 has 110 controls, and the control catalog spreadsheet for NIST SP 800-53 for government systems has 1,190 rows.
Thus, there appears to be wide divergence on how detailed a set of controls can be and still be considered comprehensive. And the divergence seen so far doesn’t seem to depend on the uniqueness of each industry’s risk posture or its sophistication in cyber defense. One approach is to prioritize controls or to tier them, with more controls mandated for larger organizations or for organizations facing higher levels of risk. (This has been a major focus of a complex and controversial effort at the Department of Defense known as the Cybersecurity Maturity Model Certification program, which aims to prioritize the controls of SP 800-171 and strike a balance between comprehensiveness and achievability.) As the Biden administration strives to adopt cybersecurity requirements for critical infrastructures, it has to be careful not to equate comprehensiveness with whatever it can get any particular industry to agree to.
The government and regulated industries will also have to grapple with the tension between tailoring and harmonization. In a previous Lawfare piece, I argued that, given technological and other differences among sectors, including different threat environments, tailoring of cybersecurity sector-by-sector will always be necessary. But excessive tailoring can lead to confusion, and it can also raise concerns that regulatory capture produces uneven requirements that are not justified on the merits alone. Rather than resisting all regulation, or seeking special dispensations sector-by-sector, critical infrastructure companies should welcome and participate in efforts to create a harmonized set of controls.
Distinguishing between inputs and outputs, between controls and performance, is crucial to a major project in the field of cybersecurity: determining what works. Improving cybersecurity will require analysis of the efficacy of technologically prescriptive rules versus performance-based rules versus management-based regulation. Such an analysis would have to be linked to disaggregated aspects of the problem, just as the PHMSA applied different approaches to the subproblems of corrosion, over-pressurization, and unintentional third-party ruptures. Among other things, we need to better understand how different approaches impact business decisions, including boardroom considerations or other corporate factors affecting implementation. An analysis of efficacy will be much more likely to be meaningful if we can break out of the rhetorical box that treats prescriptive regulation as uniformly undesirable and that honors as “performance-based” anything that provides industry with flexibility.
As my colleague Andrew Grotto has argued, improving the nation’s cybersecurity posture requires much better quantification of risk and benefit. The establishment of a Bureau of Cyber Statistics, one of the few unfulfilled recommendations of the Cyberspace Solarium Commission, would help greatly. One mission of that bureau should be to gather incident-specific reporting (including reporting on attacks and near misses) and match it with information about vulnerabilities and defensive measures to draw generalizable knowledge about the risk-reduction benefit of various defensive or mitigative measures—technology-prescriptive, performance-based, and management-based. Perhaps then, with the ability to actually measure outcomes, we will be able to call regulations performance-based—and mean it.