
Enforcement of Cybersecurity Regulations: Part 2

Jim Dempsey
Friday, March 24, 2023, 12:29 PM

While a valuable part of a cybersecurity program, “third-party audits” are too often not audits and not done by true third parties.

Equifax suffered a major data breach in 2017, despite annual third-party certifications of its information security. (Tyler Lahti, https://tinyurl.com/2p8yxmue; CC BY-SA 4.0, https://creativecommons.org/licenses/by-sa/4.0/deed.en)


Editor's Note: This is the second in a three-part series on cybersecurity enforcement. Click here for Part 1. Part 3 will be published in the coming days. 


Across many issues and sectors, one tool for enforcing government standards is to require periodic external audits of regulated entities. As a 2012 study for the Administrative Conference of the United States found, “[T]hird parties are charged with assessing the safety of imported food, children’s products, medical devices, cell phones and other telecommunications equipment, and electrical equipment used in workplaces” and with ensuring that “products labeled as organic, energy-efficient, and water-efficient meet applicable federal standards.” Third-party programs have been seen as desirable because they extend the reach of regulators whose resources are limited, shifting some regulatory costs to private parties and thereby conserving governmental resources. 


However, misunderstandings about third-party audits have been the downfall of many regulatory systems, from the building codes of New Zealand to the egregious failure of Enron’s auditors.


Not Really Third Party


In the context of assessing compliance with standards (whether financial, manufacturing, or cybersecurity), the term “third-party” is often misused. An internal audit is a first-party audit. A second-party audit is performed by or on behalf of an entity in a commercial relationship with the audited entity, such as when a clothing brand audits the factories it buys garments from for compliance with labor and safety laws. (These audits can be quite strict, because the reputation of the brand is on the line.) Strictly speaking, a third-party audit is conducted by an external auditor with no interest in the cost, timeliness, or outcome of the audit. A true third-party auditor is not paid by the auditee. Where the auditor is chosen and paid by the auditee, there may be little difference in incentive structure between an employee and a contractor. Many third-party audits, therefore, should really be thought of as first-party. Still, the practice of labeling any external audit as “third-party” persists.


In a 2002 article, business school professors Max H. Bazerman and Don A. Moore and economist George Loewenstein outlined why external auditors selected and paid by the audited entity often perform bad audits. The problem begins with “attachment bias”—the internalized concern of auditors that “client companies fire accounting firms that deliver unfavorable audits.” Other factors identified by Bazerman and his colleagues are remarkably pertinent to auditing in the cybersecurity context. For one, they concluded that bias thrives in a context of ambiguity, and cybersecurity offers plenty: outcomes are by and large unmeasurable, and security is inherently risk-based and contextual. Throw in the fact that auditors may hesitate to issue critical audit reports because the adverse consequences of doing so—damage to the relationship, potential loss of the contract—are immediate, while the costs of a report that glosses over deficiencies—the chance of a breach occurring due to defects that were not called out and remediated—are distant and uncertain, and you have a recipe for overly generous assessments.


Experience seems to confirm these doubts about “third-party” audits in the cybersecurity arena. Twitter, according to whistleblowers, never complied with the improvements ordered by the Federal Trade Commission (FTC), even though it obtained, as required by the commission in 2011, biennial cybersecurity assessments by a “qualified, objective, independent third-party professional.” As I noted in Part 1 of this series, those assessments found that Twitter’s security was sufficient even as it was suffering a major breach. Starting in 2011, a unit of Ernst & Young annually audited Equifax and certified it compliant with information security standards issued by the International Organization for Standardization. In 2017, Equifax suffered an enormous breach; only after that was its certification revoked. And Target experienced a massive breach of credit card information in November 2013, just two months after it was audited and certified as meeting the security standard of the payment card industry.


Not Really Audits


There’s a second problem with third-party audits: They’re often not audits. As both Chris Hoofnagle and Megan Gray have pointed out, the FTC has consciously used the word “assessment” to describe the third-party reviews it requires of entities settling privacy or security investigations. As Hoofnagle explains, “Assessment is a term of art in accounting wherein a client defines the basis for the evaluation, and an accounting firm certifies compliance with the client-defined standard.” Moreover, an assessor can rely on statements from the company regarding its practices rather than verifying that the practices are actually being implemented effectively. In the FTC’s early data security orders, this produced what Hoofnagle called an “echo chamber,” in which the company defined what security measures were adequate for itself and the assessor then found—based on the claims of corporate officials—that those controls were adequate.


To some extent, the FTC has responded to these criticisms. Its settlement with the online alcohol sales platform Drizly earlier this year still requires only an “assessment,” but it specifies that the assessor’s report must identify the specific evidence—“including, but not limited to, documents reviewed, sampling and testing performed, and interviews conducted”—relied on in the course of the assessment. Moreover, the commission’s order stipulates that no finding of any assessment shall rely primarily on assertions or attestations by the corporate respondent’s management. Inexplicably, however, the Drizly order, like others before it, requires that only the initial assessment be submitted to the FTC. All subsequent biennial assessments over the 20-year life of the order need only be provided to the commission upon request. (As noted in Part 1 of this series, the Transportation Security Administration’s directives for pipelines and railroads take the same approach: Assessment reports must be supplied to the government only if requested.)


Many have offered suggestions to improve the integrity of third-party monitoring. Law professor Lesley McAllister, who wrote the Administrative Conference study on third-party enforcement cited above, emphasized that the government must actively oversee any third-party verification system, starting with creating and running a process to select and approve the third-party auditors. Moreover, she warned that, in the absence of objective standards, the risk of unreliability and inconsistency in the determinations of third parties becomes higher—a point especially relevant to performance-based cybersecurity regulation, where there may be no measurable standards. Professors Jodi Short and Michael Toffel offered other approaches, including term limits on client-monitor relationships, transparency of monitoring results (probably infeasible in the cybersecurity context, since it could offer a road map to attackers), and training designed to promote objectivity and consistency. Their lead recommendation—paying monitors through a common fund to which all monitored entities would be required to contribute—would require either legislation or extraordinary cooperation among members of a given sector, but maybe it’s not too far-fetched.


None of this should be taken as disparaging the ecosystem of firms and experts offering cybersecurity audits. As one study concluded, such assessments can foster “organizational learning”—that is, they can help an entity understand and address its vulnerabilities, assuming it wants to do so. As the National Institute of Standards and Technology noted in its 2022 special publication “Assessing Security and Privacy Controls in Information Systems and Organizations,” “Control assessments facilitate a cost-effective approach to managing risk by identifying weaknesses or deficiencies in systems, thus enabling the organization to determine appropriate risk responses in a disciplined manner that is consistent with organizational mission and business needs” (emphasis added). In other words, like self-assessments, external audits performed by contractors chosen by the audited entity can contribute immensely to the entity’s cybersecurity program. But they should not be confused with enforcement of government-mandated standards.


Humility, Incrementalism, and Experimentation


In her 1997 book on compliance, Bridget Hutter, professor of risk regulation at the London School of Economics, offered a still-relevant summary of regulatory enforcement: Compliance is as much a process as an event. Regulators, she wrote, “are typically … in long-term relationships with the regulated, whose compliance is of ongoing concern.” The relationship is “reflexive, with each party adapting and reacting to the moves” of the other. The tools available to regulators are not only legal action but also education, advice, persuasion, and negotiation. Enforcement is serial and incremental. Full compliance is short-lived.


These points seem especially true of cybersecurity, where so much remains unknown about effectiveness. Regulators and the regulated alike should recognize that initial efforts will not be perfect. Instead, experimentation with various methods of enforcement should be matched with data collection and analysis to assess effectiveness.


In Part 3 of this series, I consider the need for government inspection of critical infrastructure cybersecurity. Spoiler alert: Government supervision need not be as scary as it sounds, and it is already widespread, including for cybersecurity in some sectors.


Jim Dempsey is a lecturer at the UC Berkeley Law School and a senior policy advisor at the Stanford Program on Geopolitics, Technology and Governance. From 2012 to 2017, he served as a member of the Privacy and Civil Liberties Oversight Board. He is the co-author of Cybersecurity Law Fundamentals (IAPP, 2024).
