
Preemption of State Cybersecurity Laws: It’s Complicated

Jim Dempsey
Wednesday, August 24, 2022, 8:01 AM

The federal privacy bill currently being considered by the House of Representatives would be a huge improvement over the current state of law with respect to the cybersecurity of personal information, but a few key areas need adjustment.

iPhone data security lock (Book Catalog, CC BY 2.0)

Published by The Lawfare Institute

The privacy bill awaiting consideration in the U.S. House of Representatives is indeed, as Peter Swire and many others have said, a monumental achievement in both its comprehensiveness and its nuanced approach. Swire is right that the bill will serve, justifiably, as the focal point for all future federal legislative efforts on consumer privacy. 

In addition to provisions on all the hot-button issues of privacy, the bill, H.R. 8152, contains cybersecurity language that deserves its own deep dive. My goal here is to explore the interactions among the bill’s definitions, its enforcement mechanisms, and its preemption of state law with respect to cybersecurity. 

Reasonable Data Security Under the American Data Privacy and Protection Act

The cybersecurity language in the bill as reported by the Energy and Commerce Committee on July 20 starts with a simple proposition: “A covered entity or service provider shall establish, implement, and maintain reasonable administrative, technical, and physical data security practices and procedures to protect and secure covered data against unauthorized access and acquisition.” The language has a storied past, dating back to the 1974 Privacy Act, which requires federal agencies to “establish appropriate administrative, technical, and physical safeguards to insure the security and confidentiality of records [containing information about individuals] and to protect against any anticipated threats or hazards to their security or integrity.” The 1996 Health Insurance Portability and Accountability Act (HIPAA) requires covered entities to maintain “reasonable and appropriate administrative, technical and physical safeguards” to protect health data, and the 1999 Gramm-Leach-Bliley Act (GLBA) requires the financial services regulators to establish “appropriate standards ... relating to administrative, technical and physical safeguards” for customer information.

This background suggests one item for improvement: The bill should draw more comprehensively on the remarkably forward-looking language of the 1974 Privacy Act and include a reference to protecting the integrity and availability of data. Perhaps something like “reasonable administrative, technical, and physical data security practices and procedures to protect the integrity and availability of covered data and to secure it against unauthorized access and acquisition.” In this age of ransomware, given individuals’ heavy reliance on cloud services, integrity and availability are now consumer protection issues. If a lifetime’s worth of photos is rendered unavailable, it would be better not to have to argue over whether an attacker “accessed” or “acquired” the photos before encrypting or wiping them. (Both HIPAA and the GLBA require safeguards to protect the integrity of data, although neither expressly mentions availability.)

Like other existing cybersecurity laws and regulations, the proposed bill makes it clear that there is no one-size-fits-all solution. Instead, it recognizes that the data security practices required shall be appropriate to the size and complexity of the covered entity; the nature and scope of its collecting, processing, or transferring of data; the volume and nature of the data at stake; its sensitivity; the current state of the art in cybersecurity; and the cost of available cybersecurity tools in relation to the risks and nature of the covered data. Again, roughly similar language appears in HIPAA and in the safeguards rules for financial data adopted under the GLBA by the Comptroller of the Currency and other major bank regulators and, separately, by the Federal Trade Commission (FTC).

Who Assesses the Risk Assessments, and the Question of Enforcement

The bill goes on to say that covered entities (and service providers, but I’ll just refer to both here as “covered entities”) shall conduct a risk assessment to identify and assess any material internal and external risk to, and vulnerability in, the security of their systems. The bill then requires covered entities to take preventive and corrective action to mitigate any reasonably foreseeable risks or vulnerabilities they have identified.

In this regard, the draft bill encounters two problems at the core of all cybersecurity regulation. The first is the unavoidable centrality of risk assessment, specifically self-assessment of risk. Certainly, cybersecurity must be risk based, and the risk calculus will differ from entity to entity. But what if an entity is myopic or unduly optimistic in its risk assessment? Can it skimp on protections because it has lowballed the risk it faces? Of course, after a breach, a regulator can find that a risk assessment was inadequate. But what about before the breach?

This raises the second problem: enforcement. Who assesses a covered entity’s risk assessment? Does a regulator have to wait for a data disaster before acting?

Across the patchwork of cybersecurity regulation under current federal law, there are a variety of approaches, but in many cases sector-specific agencies have—and exercise—the power to examine, before an incident occurs, the cybersecurity risk assessments of entities under their jurisdiction and the choices that flow from them. In the financial services field, where regulators routinely and persistently inspect the entities under their jurisdiction, the Federal Financial Institutions Examination Council has specifically spelled out how bank examiners should assess a bank’s risk assessment, as a prelude to examining its overall cybersecurity posture. The initial directive issued by the Transportation Security Administration (TSA) after the ransomware attack on Colonial Pipeline required pipelines to submit a risk assessment to TSA for review. (It would be good to know what TSA did with those assessments and whether it sent any back for further work.) A second directive, revised in July of this year, requires pipelines to submit a cybersecurity implementation plan to TSA for its approval and to annually submit a cybersecurity assessment program. The Securities and Exchange Commission (SEC) conducts regular examinations of broker-dealers and investment firms and has made “identification and assessment of cybersecurity risks” one focus of those exams since at least its 2014 Risk Alert. In the medical devices bill that recently passed the House, cybersecurity risk assessments would be reviewed by the Food and Drug Administration in the process of evaluating device approval applications. At the state level, examples of regulatory oversight of cybersecurity include the “safety and soundness examinations” conducted by the New York State Department of Financial Services.
And the California Consumer Privacy Act calls for rules requiring businesses to submit their risk assessments to the California Privacy Protection Agency (CPPA) and empowers the agency to conduct audits to ensure compliance with the act, which includes a reasonable cybersecurity provision similar to the one in the draft federal law.

In contrast, the FTC, which would enforce H.R. 8152, is nearly unique among regulatory agencies in that it exercises essentially no inspection or monitoring function. Its cybersecurity enforcement actions, however laudable, address only a tiny fraction of data breaches and occur only after the damage has been done.

The cybersecurity provisions of the privacy bill cover vastly more entities than are monitored by the financial services regulators or examined by the SEC. It would be completely unworkable for the FTC to review the risk assessment of every entity covered by the privacy bill. But perhaps there is a middle ground, where the FTC could be granted some inspection authority that allows it, on some prioritized basis, to examine the risk assessment strategies of companies and thereby identify weaknesses and best practices. As the bill moves forward, Congress may want to consult with administrative law experts for suggestions on how other agencies structure their inspection or examination schedules.

Technology-Neutral Regulations?

The bill authorizes the FTC to promulgate regulations to establish processes for complying with the cybersecurity section. (Cybersecurity rulemaking was omitted from the full committee amendment in the nature of a substitute but was added back in by amendment.) The grant of explicit rulemaking authority to the FTC is welcome and probably necessary after West Virginia v. Environmental Protection Agency, but it may be unduly restrictive. The draft language says that the FTC may adopt “technology-neutral regulations.” To be sure, as a general principle cybersecurity regulation should focus on outcomes, not technologies. However, does technology-neutral mean that the FTC cannot require encryption of certain data, as it did for financial institutions under its jurisdiction in its recently revised safeguards rule for financial data under the GLBA (while leaving room for effective alternative compensating controls)? Or does it merely mean that the FTC cannot mandate use of a specific encryption standard? But even then, why not? The federal government puts considerable effort into the development of adequately strong encryption standards. Should the FTC not be able to endorse that work and say that data should be encrypted in accordance with standards set by the National Institute of Standards and Technology? Roll-your-own encryption is almost never a good idea. Does technology-neutral mean that the FTC cannot, in any context, require multi-factor authentication, another measure it required under its revised GLBA rule (also allowing the use of reasonably equivalent or more secure access controls)? After West Virginia v. EPA, the reference to “technology-neutral” may be an invitation to the courts to reject regulations they believe are too technology specific.
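To make the roll-your-own point concrete, consider a minimal sketch (purely illustrative; neither the bill nor any FTC rule specifies code, and the function name here is a hypothetical): deriving a cryptographic key with PBKDF2-HMAC-SHA256, a construction specified in NIST SP 800-132 and built into standard libraries, is exactly the kind of standards-based choice a regulator could reference, in contrast to an ad hoc homemade scheme.

```python
import hashlib
import secrets

def derive_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 256-bit key with PBKDF2-HMAC-SHA256, a key-derivation
    function specified in NIST SP 800-132, using vetted library code
    rather than an ad hoc, roll-your-own construction."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

salt = secrets.token_bytes(16)   # a fresh random salt per credential
key = derive_key("correct horse battery staple", salt)
assert len(key) == 32            # SHA-256 output: a 256-bit key
```

The point is simply that the primitive, the salt, and the iteration count all come from published standards and tested implementations, which is the sort of benchmark a rule that is not strictly technology neutral could endorse.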

Instead of the seemingly rigid requirement for tech neutrality, one approach may be to draw upon or reference the language of 19 U.S.C. § 2532(3), which directs agencies, in developing standards, to give preference where appropriate to performance criteria rather than design criteria.

Preemption

Currently, while only five states have comprehensive consumer privacy laws, many states have cybersecurity laws—some broad, some sector-specific, some overlapping even within a state—comprising a crazy quilt of legislative judgments. And, critically, some go beyond the protection of personal data and interweave protections aimed at ensuring the integrity and availability of operations.

In many respects, the authors of H.R. 8152 have shown great care in crafting its preemptive effect on state law. The bill would preempt any state or local law or regulation “covered by the provisions of this Act, or a rule, regulation, or requirement promulgated under this Act” (emphasis added). In 1993, in CSX Transportation, Inc. v. Easterwood, the Supreme Court explained that “covering” is a “more restrictive term” than “related to,” and that federal law will accordingly “cover” the subject matter of a state law only if it “substantially subsume[s]” that subject. On top of that, § 404(b)(2) of the federal bill contains a list of 19 categories of state law that would not be preempted, including the data breach notification laws of all 50 states plus the District of Columbia, Guam, Puerto Rico, and the Virgin Islands.

State Reasonable Security Laws

To begin with, 23 states have cybersecurity laws requiring, in language similar to H.R. 8152, that businesses protect personal information with reasonable security measures. H.R. 8152 would preempt “any law, regulation, rule, standard, requirement, or other provision having the force and effect of law of any State, or political subdivision of a State, covered by the provisions of this Act, or a rule, regulation, or requirement promulgated under this Act.” This would preempt those 23 state laws pretty clearly, with a few caveats, which I describe below. Under § 402 of the bill, however, the attorneys general of those states would gain the right to enforce the federal reasonable security language, so the availability of state government enforcement would not suffer under preemption. (The current text of the bill also authorizes the California Privacy Protection Agency to enforce the federal law, although, for the CPPA to exercise that authority, state legislative action would be needed.) And, of course, for the 31 states and territories that do not have reasonable security laws, their attorneys general would gain new authority and their citizens would gain new protection.

Private Right of Action

The same is true of the private right of action. Only some of the state cybersecurity laws (California notable among them) allow one. Section 403 of H.R. 8152, in contrast, would create a nationwide cause of action (extending, of course, only as far as allowed under the constitutional requirement of standing).

The private right of action under H.R. 8152, however, starts two years after its effective date, but preemption, as stated in § 408, would occur 180 days after enactment of the federal law. This means that, for the states with cybersecurity laws allowing a private right of action, there would be a gap of a year and a half in which residents currently able to bring private rights of action would not be able to do so.

Definition of Personal Information

In one key respect, the all-crucial definition of personal information, the proposed federal law is broader than the state reasonable security laws. In line with the more modern understanding of the risks of data analytics and profiling, the House bill has a broad definition of covered information: “The term ‘covered data’ means information that identifies or is linked or reasonably linkable, alone or in combination with other information, to an individual or a device that identifies or is linked or reasonably linkable to an individual, and may include derived data and unique identifiers.”

In contrast, many of the state cybersecurity laws have a narrower, outdated definition of personal information, probably derived from their initial purpose of protecting the data elements that are the building blocks of identity theft. The Illinois law, for example, defines personal information as “[a]n individual’s first name or first initial and last name in combination with any one or more of” certain listed data elements, including Social Security number, driver’s license number, medical or health insurance information, account number or credit or debit card number, unique biometric data, or a “user name or email address, in combination with a password or security question and answer that would permit access to an online account.” Laws requiring reasonable security measures in Delaware and Florida are similar, the definition in Alabama law is somewhat narrower, and those in the New York and Texas laws are narrower still. These laws contain nothing covering cookie data, location tracking information, IP addresses, cell phone numbers, or any of the myriad other kinds of digital data now used to track behavior and profile individuals. So the federal reasonable security requirement would be a significant expansion over obligations under state statutes with a narrow definition of personal information.

Sector-, Data-, or Technology-Specific State Laws

In addition to the 23 reasonable security measures laws, there are multiple other state laws on cybersecurity. According to one count, 21 states have adopted the model cybersecurity law drafted by the National Association of Insurance Commissioners, specific to regulated insurance companies. Maine and Minnesota have reasonable security measures laws specifically for internet service providers. Colorado and Vermont have cybersecurity regulations (differing in scope) for investment professionals. Other state laws protect medical data, biometric data, or data generated by tools and services used in K-12 education.

The § 404(b)(2) exceptions to the preemption language in H.R. 8152 would preserve some of these state cybersecurity laws. For example, California, Vermont, and Iowa have cybersecurity laws on educational technology, and § 404(b)(2)(C) of H.R. 8152 would preserve state laws “that govern the privacy rights or other protections of … students, or student information.” Likewise, California Civil Code 1798.81.6, which imposes patching and other vulnerability management obligations on consumer credit reporting agencies and their data processing service providers, would probably be saved by § 404(b)(2)(K), which preserves “[l]aws that address … credit reporting and investigations.” Section 404(b)(2)(S) would preserve the Nevada law requiring use of encryption to protect personal information. Finally, while the federal bill does not cover certain job applicant or employee data, § 404(b)(2)(C) would specifically preserve state laws “in so far as” they protect employee data. (There is a typo in § 404(b)(2)(C) that should be fixed.) By my reading, all 23 of the state reasonable security laws cover all job applicant and employee data.

State and Local Government Agencies

By my count, 13 of the state reasonable security laws apply not only to businesses but also to state and local government entities. (In some cases, the state laws directly apply to government agencies, while in other cases, “business” is defined as including state and local government agencies.) The federal bill expressly does not cover state and local governments, so given the narrowness of the “covered by” language in the preemption clause of the federal bill, the state reasonable security laws should continue to remain in effect to the extent that they impose an obligation on state and local agencies.

Operational Security Versus the Security of Personal Information

Preemption poses an especially vexing question with respect to state cybersecurity laws that have the dual purpose of protecting personal information and ensuring the availability and integrity of information systems. A prime example is the set of cybersecurity regulations adopted by the New York State Department of Financial Services. The introduction to the regulations expressly states that they aim not only to protect personal data against unauthorized access but also to ensure the integrity and availability of information technology systems. Under the New York regulation, an information system is broadly defined as any “discrete set of electronic information resources organized for the collection, processing, maintenance, use, sharing, dissemination or disposition of electronic information, as well as any specialized system such as industrial/process controls systems, telephone switching and private branch exchange systems, and environmental control systems.” While the federal bill preserves “[l]aws that address banking records,” the New York regulations apply to systems that contain no records at all, and they cover insurance companies as well. Covered entities are required, for example, to report to the superintendent of financial services any “cybersecurity event,” defined as “any act or attempt, successful or unsuccessful, to gain unauthorized access to, disrupt or misuse an information system or information stored on such information system,” including events “that have a reasonable likelihood of materially harming any material part of the normal operation(s) of the covered entity.”

Likewise, the model law for the insurance sector requires insurance companies to “develop, implement, and maintain a comprehensive written information security program … that contains administrative, technical, and physical safeguards for the protection of nonpublic information and the licensee’s information system” (emphasis added). Covered insurance companies are required to consider, for example, adoption of audit trails sufficient to support normal operations and obligations of the licensee.

The operational technology of the insurance and banking sectors, the integrity of their transactions, and the stability of their data systems from a purely functional standpoint are concerns not addressed by H.R. 8152. Perhaps the “covered by” language is already sufficient to protect state cybersecurity laws that extend to these subjects, but additional clarity may be prudent. The easiest approach may be to add to the list of non-preempted subjects in § 404(b)(2) something about state laws aimed at the security of operational technology and the accuracy and traceability of financial transactions.

In somewhat the same vein, California and Oregon have cybersecurity laws for connected devices that require security measures designed to protect the device and any information contained therein. The “Internet of Things” (IoT) is a very broad concept, encompassing a wide range of consumer and industrial devices. The risks associated with IoT devices go well beyond the protection of personal information. For example, IoT devices have been harnessed to launch denial of service attacks, and there is even concern that consumer devices can provide an avenue for attack on corporate systems. In the absence of federal action, it would probably be unwise to preempt states from addressing IoT security. Section 404(b)(2)(G) preserves civil laws governing “unauthorized access to information or electronic devices,” but that might not be enough.

Defining Reasonable Security

Among the 23 states requiring reasonable security for personal information, a handful have already defined what reasonable security means. New York did that in its statute, California did it by referencing the critical security controls of the Center for Internet Security, and Massachusetts did it by regulation. Some states have adopted safe harbor rules that define reasonable security by reference to industry standards. Almost all of the other states have implicitly defined reasonable security through their enforcement actions where, like the FTC, they have alleged that the failure to take specific security measures (patching, for example, or password management) was a failure to offer reasonable security.

Given that some states have already specified what is reasonable security, the timing of preemption could be disruptive. The federal bill says that the FTC may promulgate cybersecurity regulations, but there is no mandate to do so, and no deadline, yet the preemption takes effect 180 days after enactment of the federal law and preempts any “regulation, rule, standard, requirement, or other provision having the force and effect of law.” Until the FTC adopts regulations defining reasonable security, states should be allowed, in enforcing the federal law, to pursue their interpretation of what is reasonable. To some extent, state latitude in defining reasonable security is inherent in the bill’s grant of enforcement powers to state attorneys general, but it may be useful to clarify that states are able to rely on their prior work at least until the FTC issues regulations with a clear definition of the term.


Overall, viewing these issues from a national perspective, the federal bill would be a huge improvement over the current state of law with respect to the cybersecurity of personal information. One important missing element, however, is the enforcement of cybersecurity practices before disaster strikes. Policymakers should consider whether there is some way for the FTC to exercise the same kind of monitoring or inspections authority seen under other cybersecurity structures. And a few thorny issues need to be sorted out regarding preemption. Key among them are the preservation of state laws that go beyond the protection of personal information and address the operational integrity of systems and the preservation, at least until the FTC issues rules, of the work of those states that have already undertaken the definition of what is “reasonable” security.

Jim Dempsey is a lecturer at the UC Berkeley Law School and a senior policy advisor at the Stanford Program on Geopolitics, Technology and Governance. From 2012-2017, he served as a member of the Privacy and Civil Liberties Oversight Board. He is the co-author of Cybersecurity Law Fundamentals (IAPP, 2024).
