Cybersecurity & Tech

Enforcement of Cybersecurity Regulations: Part 1

Jim Dempsey
Tuesday, March 21, 2023, 10:18 AM

As government policy moves toward more binding rules for cybersecurity, how should they be enforced?

Federal Trade Commission Headquarters in Washington, D.C. FTC cybersecurity orders have been hampered by their lack of enforcement. (Carol M. Highsmith, https://tinyurl.com/2p96j2k8; Library of Congress, Public Domain)

Published by The Lawfare Institute
in Cooperation With
Brookings

Editor's note: This is the first in a three-part series on cybersecurity enforcement. Click here for Part 2. Part 3 will be published in the coming days. 

A key pillar of the Biden administration’s recently released cybersecurity strategy is the establishment of mandatory requirements for critical infrastructure. Noting that it had already issued binding directives for pipelines and railroads, the administration promised more for other sectors using existing law (a promise delivered on days later with a directive to the aviation sector and with action on drinking water systems), and it pledged to work with Congress to fill gaps in statutory authorities for other sectors.

This fundamental shift in policy raises a critical question: How can the government enforce such requirements? Experience across the regulatory landscape offers some cautionary lessons.

Consider Twitter. Among the many damning claims made last year by former Twitter security lead turned whistleblower Peiter Zatko was that the company had never complied with the 2011 Federal Trade Commission (FTC) order directing it to improve its data security. Indeed, among many other deficiencies, Zatko alleged that Twitter never fixed the very flaw—granting over half its employees administrative access—that the commission had called out in its initial complaint against the company in 2010. (A second Twitter engineer came forward more recently with a similar claim.) As of 2021, according to Zatko, over half of Twitter’s 8,000 employees still had privileged access to production systems and sensitive user data. This meant that the entire company could be compromised if any one of those workers fell for an attack—which in fact someone did in 2020, causing what cybersecurity authority Dmitri Alperovitch said at the time was the worst hack ever of a major social media platform.

One of the most remarkable aspects of this tale is that Twitter, as required by the 2011 FTC order, has undergone an assessment of its cybersecurity practices every two years by a supposedly independent third party, which concluded that Twitter’s security was just fine. Indeed, the assessment report for 2019 through 2021, the very period when the company suffered that major breach, stated that Twitter’s security controls met or exceeded the FTC’s requirements and operated throughout the reporting period “with sufficient effectiveness to provide reasonable assurance to protect the security, privacy, confidentiality, and integrity of non-public consumer information.”

The FTC is not the only agency whose approach to cybersecurity enforcement has failed. In 2016, the Department of Defense began including a clause in solicitations and contracts requiring contractors to self-certify that they were in compliance with the security provisions of the National Institute of Standards and Technology (NIST) Special Publication 800-171, which lays out a comprehensive set of cybersecurity controls for contractor systems handling unclassified information. But in a CyberSheath/Merrill Research survey last year, many contractors admitted they lacked basic controls. NIST SP 800-171 requires, for example, that entities use multifactor authentication (MFA) for local and network access to privileged accounts and for network access to nonprivileged accounts, but only one in five respondents had any form of MFA—despite self-certifying compliance or promising they would implement the practice.

Just as there are multiple approaches to regulation, there are multiple methods of enforcement, including self-certification, “third-party audit” (more on my use of quotation marks in a later piece), mandatory submission of compliance plans (with or without requiring government review and approval), government inspections or other forms of supervision or monitoring, post hoc case-by-case actions by regulators, and private litigation. All have been tried across the regulatory landscape, many in the cybersecurity field. Each has its limits and each has a potential role to play in cybersecurity. As the Defense Department and Twitter examples show, self-assessments, self-certification, and third-party audits are unlikely to succeed unless backed up with additional layers of accountability. Ultimately, strengthening America’s cybersecurity posture will probably require an expansion of the government supervision model.

In this regard, the recent directives from the administration are likely not the last word on cybersecurity enforcement. Overall, the course of cybersecurity regulation in coming years should be guided by a spirit of experimentation and should strive for measurements correlating inputs (in terms of both substantive regulation and enforcement tools) to outputs (in the form of measurably better security).

For Any Regulatory Approach, Enforcement Matters

In an earlier Lawfare piece, I explored three different approaches to cybersecurity regulation: means-based rules that prescribe specific technologies or solutions, performance-based rules that define outcomes but leave the means to the regulated entity, and a management-based approach that requires adoption of practices such as conducting a risk assessment or adopting access-management policies. A fourth type of regulation—mandatory information disclosure—has already become a major theme in U.S. cybersecurity policy in recent years, beginning with state data breach notification laws and recently a Securities and Exchange Commission proposal that would require publicly owned companies to disclose incidents and risks to the investing public, as well as the new law requiring critical infrastructure providers to report incidents to the federal government. Cybersecurity, like other fields, will require a hybrid strategy that combines all of these approaches.

Regardless of the form it takes, any regulatory system is only as good as its enforcement. As University of Pennsylvania law professor Cary Coglianese has explored—and a 2018 National Academies study agreed—performance-based rules are meaningless without enforcement. Notably, enforcement of performance-based rules is especially difficult when the regulator must predict outcomes. For example, performance-based fire codes require regulators to predict the performance of individual structures before a fire occurs. However, until a fire occurs, it is difficult—if not impossible—to measure the performance of a building; after a fire, there’s ample data, but it’s too late to matter. The same may be true of some performance-based cybersecurity goals. For example, there is really no way to know if a performance standard such as “protect organizations from automated, credential-based attacks” (the first outcome specified in the administration’s cross-sector cybersecurity performance goals) is actually being met. Up until the moment of a successful attack, it seems the desired outcome is being achieved; a minute later, it is clear that it wasn’t. Additionally, as two U.K. commentators warned in 2007, outcomes-based regulatory approaches may actually yield spasms of overzealous enforcement after an incident, especially one that becomes politically charged.

Enforcement is also key to the management-based regulation that has dominated the FTC’s enforcement actions and that features prominently in the rules for pipelines and railroads issued by the Biden administration. Under this approach, regulated entities are required to assess their risks and adopt measures to mitigate them. Faced with such an open-ended mandate, some firms can be expected to minimize their vulnerabilities and to create plans that limit their implementation costs. “Regulators,” Coglianese has concluded, “therefore need to be able to assess whether firms’ planning has been adequate and monitor whether firms are following their plans … by imposing suitably detailed record-keeping requirements and instituting inspections or third party audits.”

In considering what should be next for cybersecurity enforcement, I want to put aside the two approaches that so far have dominated with regard to the protection of consumer data (other than financial data held by banks): post hoc case-by-case investigations by regulators and private litigation in class actions. While both should remain part of the mix, neither one is systematic or forward-looking. Taking place after a breach, they can single out the one mistake the attackers exploited and in doing so often lose sight of the overall reasonableness of the victim’s security program. In many cases, remedial action does not come until years after the incident. And because administrative enforcement actions and private litigation almost always settle with no admission of wrongdoing, they fail to offer industry any generalizable certainty on what is required.

The cybersecurity enforcement ecosystem must be augmented with other approaches, which I explore in three articles. In this article and the next one, I discuss approaches that have been tried and found lacking: self-certification and third-party auditing. Both, despite their drawbacks, will quite likely be part of the cybersecurity mix, so it is necessary to acknowledge their shortcomings to understand where and how they can be deployed with positive effect.

Self-Assessment and Self-Certification: Internally Useful but Not Really Enforcement

Experience with cybersecurity self-assessment as an enforcement tool is generally dismal, as illustrated by the Defense Department’s efforts to impose security controls on its suppliers. Insiders acknowledge that many defense contractors, large and small, are not fully compliant, despite attesting in their contract documents that they were compliant or promising to become so. Many contractors rely on plans of action and milestones—commitments to achieve compliance by some future date—but there is little indication that those plans are enforced. A June 2022 Defense Department memorandum reminded contract officers to use the tools at their disposal to spur implementation. However, as one commentary noted with lawyerly understatement, “[T]he government has had low visibility regarding [contractors’] actual implementation.”

Findings from the Department of Defense Office of the Inspector General confirm the weakness of self-certification. In 2019, the inspector general issued a report on implementation of the SP 800-171 controls. Of the 10 contractors assessed, seven did not enforce the use of multifactor authentication to access their networks and systems; seven did not configure their systems to enforce the use of strong passwords; two did not identify network and system vulnerabilities; and six did not mitigate network and system vulnerabilities in a timely manner. In 2022, the inspector general issued a new report reviewing the practices of 10 academic and research contractors and found similarly widespread shortfalls. Again, the contractors in both studies had certified in their contracts that they were fully compliant or had a plan to become so. (The Defense Department is now seeking a more effective path forward, with a system called Cybersecurity Maturity Model Certification, or CMMC 2.0. The new system will require progressively advanced levels of accountability, with third-party assessments for mid-tier contracts and government-led assessments for the most sensitive contracts.)

Despite this experience, recent cybersecurity mandates from the Biden administration continue to rely on self-assessment. The revised Transportation Security Administration (TSA) directive for pipelines requires them to develop a self-assessment program for their critical cyber systems to ascertain the effectiveness of cybersecurity measures and to identify and resolve device, network, and/or system vulnerabilities. The directive for railroads is the same. Additionally, a March 3 memorandum from the Environmental Protection Agency (EPA) to state drinking water administrators also allows self-assessments as one option for the first step in the process of reviewing the cybersecurity of public water systems. The Department of Energy’s Cybersecurity Capability Maturity Model likewise depends on self-assessment and self-certification.

These mechanisms cry out for oversight. The TSA pipeline and railroad directives require regulated entities to submit their self-assessment plans to the TSA, but they don’t actually require the companies to submit the results of their assessments. Instead, they merely say that the TSA can request the results. (The EPA memo, however, does require water systems to submit their assessments to their state regulators.) Overseers in Congress and the White House should find out if the TSA has indeed seen the assessment results and done anything about them. Assessments that found full compliance should be viewed skeptically, to say the least.

Self-assessment is a crucial element—indeed, the starting point—for any cybersecurity program. Every entity should be regularly assessing its cybersecurity risk and the effectiveness of its cybersecurity practices. In this regard, the TSA in its pipeline and railroad directives and the EPA in its drinking water memo are right on track: Risk assessment is clearly a first step in developing a cybersecurity program. But no one should confuse self-assessment with enforcement.

Self-assessment may have a role to play in enforcement in two contexts. One is where there is an effective complaint system that brings potential vulnerabilities to the attention of the regulator. This may exist, for example, in the field of automobile safety. Under federal law, the National Highway Traffic Safety Administration (NHTSA) issues binding motor vehicle safety standards, but the agency does not preapprove motor vehicles or technologies. Instead, the NHTSA’s organic statute creates a self-certification system of compliance, in which manufacturers certify that their products meet applicable standards. The NHTSA does some sample testing for compliance, but to prioritize its enforcement efforts in a risk-based approach it relies heavily on flows of data, including consumer complaints. In 2019, for example, the NHTSA received 75,267 consumer complaints, 32,482 of which required further substantive review, resulting in 966 recalls involving 38.6 million vehicles and 14.4 million pieces of equipment belonging to 53 million people.

There’s nothing comparable in the cybersecurity field. Bug bounty and vulnerability disclosure programs generate valuable data, but it remains with the individual software developer or system operator and is not available for identification of cross-sector trends. For better or worse, the 2022 legislation that will require critical infrastructure providers to report cyber incidents (the Cyber Incident Reporting for Critical Infrastructure Act) includes a provision stating specifically that information submitted thereunder may not be used to regulate, including through an enforcement action, the activities of any covered entity. In contrast, the NHTSA receives multiple mandated reports from manufacturers, which feed into its enforcement and investigations prioritization process. (In 2019, for example, the NHTSA received more than 125,000 reports from manufacturers regarding potential defects.) 

One possible source of input to support a cybersecurity self-certification process is incident reporting required by the sector-specific agencies under authority separate from the critical infrastructure law. The TSA directives, for example, require both pipelines and railroads to report incidents. This data, however, will have two limitations: In focusing on incidents, the reporting is mainly backward-looking, and it will not include vulnerabilities. The less an entity knows about its security posture, the less reporting it will have to do.

Another context in which self-certification may work is where there is an effective whistleblower system in place. For example, where government contractors are required to certify compliance with cybersecurity standards as part of the procurement process, failing to deliver on those assurances is actionable under the False Claims Act (FCA). Moreover, the act allows a whistleblower to get a share of any money recovered from the contractor. This can be a huge incentive for engineers and other insiders whose complaints about inadequate cybersecurity are brushed aside. But cases take years, outcomes are uncertain, and being a whistleblower carries a huge personal cost. Whistleblowers may find little encouragement in the recent Department of Justice report showing that, while the number of FCA cases increased slightly in 2022, recoveries decreased. The report cited only one FCA cybersecurity case in all of 2022, despite the department’s high-profile October 2021 announcement of an FCA cyber-fraud initiative. Even more troubling, there are reports that the government continues to be quite aggressive in forcing dismissal of cases it chooses not to claim for itself. And in any case, the act applies only to entities doing business with the government; it is not available against critical infrastructure entities serving the private sector and the general public. All in all, this is not encouraging for enforcement of self-certifications in the cybersecurity field.

Self-assessment and self-certification are clearly not sufficient for enforcing cybersecurity regulations. In the next installment in this series, I’ll consider another approach that has been relied on heavily, “third-party audits,” and I’ll explain why the phrase should be written with cautionary quotation marks.


Jim Dempsey is a lecturer at the UC Berkeley Law School and a senior policy advisor at the Stanford Program on Geopolitics, Technology and Governance. From 2012-2017, he served as a member of the Privacy and Civil Liberties Oversight Board. He is the co-author of Cybersecurity Law Fundamentals (IAPP, 2024).
